Dec 09 16:56:11 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 16:56:11 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:11 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 
16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 16:56:12 crc 
restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 
16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 16:56:12 crc restorecon[4681]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 
16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 16:56:12 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 09 16:56:13 crc kubenswrapper[4853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 16:56:13 crc kubenswrapper[4853]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 09 16:56:13 crc kubenswrapper[4853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 16:56:13 crc kubenswrapper[4853]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 09 16:56:13 crc kubenswrapper[4853]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 09 16:56:13 crc kubenswrapper[4853]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.389683 4853 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395736 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395774 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395786 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395795 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395805 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395813 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395824 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395835 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395843 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395853 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395861 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395869 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395877 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395885 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395894 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395903 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395911 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395920 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395928 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395937 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 
09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395945 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395953 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395961 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395969 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395980 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.395991 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396001 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396011 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396020 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396029 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396038 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396048 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396056 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396069 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396078 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396087 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396097 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396106 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396114 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396122 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396131 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396143 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396151 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396162 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
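
The long run of feature_gate.go:330 warnings is expected on OpenShift: the cluster's feature set hands the kubelet gate names that only OpenShift components understand (GatewayAPI, AdminNetworkPolicy, and so on), and the kubelet warns once per unknown name and then ignores it, while recognized upstream gates that have gone GA or been deprecated (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy, KMSv1) instead get the "will be removed in a future release" notices. A minimal Go sketch of that classification, using invented gate tables rather than the real k8s.io/component-base featuregate code:

    package main

    import "fmt"

    // Sketch only: hypothetical gate tables standing in for what the binary
    // was compiled with; the real logic differs in detail.
    var ga = map[string]bool{ // gates already graduated to GA (subset)
        "CloudDualStackNodeIPs":                  true,
        "DisableKubeletCloudCredentialProviders": true,
        "ValidatingAdmissionPolicy":              true,
    }

    var deprecated = map[string]bool{"KMSv1": true}

    var known = map[string]bool{ // every gate this build recognizes
        "CloudDualStackNodeIPs": true, "DisableKubeletCloudCredentialProviders": true,
        "ValidatingAdmissionPolicy": true, "KMSv1": true, "NodeSwap": true,
    }

    func apply(overrides map[string]bool) map[string]bool {
        effective := map[string]bool{}
        for name, enabled := range overrides {
            if !known[name] {
                // warn and ignore, mirroring feature_gate.go:330
                fmt.Printf("W unrecognized feature gate: %s\n", name)
                continue
            }
            if ga[name] {
                fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
            } else if deprecated[name] {
                fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, enabled)
            }
            effective[name] = enabled
        }
        return effective
    }

    func main() {
        fmt.Println(apply(map[string]bool{"GatewayAPI": true, "KMSv1": true, "NodeSwap": false}))
    }

The warnings are harmless noise from the kubelet's point of view; the unknown names are consumed by other OpenShift operators reading the same feature-set configuration.
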
Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396171 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396181 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396190 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396199 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396207 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396216 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396224 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396232 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396240 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396248 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396256 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396264 4853 feature_gate.go:330] unrecognized feature gate: Example Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396274 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396283 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396292 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396302 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396311 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396319 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396328 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396336 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396346 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396355 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396363 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396372 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396381 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396389 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 
09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.396397 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396571 4853 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396593 4853 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396647 4853 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396663 4853 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396678 4853 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396690 4853 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396706 4853 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396728 4853 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396739 4853 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396749 4853 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396760 4853 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396771 4853 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396781 4853 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396791 4853 flags.go:64] FLAG: --cgroup-root="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396800 4853 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396810 4853 flags.go:64] FLAG: --client-ca-file="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396819 4853 flags.go:64] FLAG: --cloud-config="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396829 4853 flags.go:64] FLAG: --cloud-provider="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396839 4853 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396853 4853 flags.go:64] FLAG: --cluster-domain="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396862 4853 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396872 4853 flags.go:64] FLAG: --config-dir="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396882 4853 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396893 4853 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396905 4853 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396914 4853 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396924 4853 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396934 4853 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 
09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396979 4853 flags.go:64] FLAG: --contention-profiling="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396989 4853 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.396999 4853 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397010 4853 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397021 4853 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397033 4853 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397044 4853 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397054 4853 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397063 4853 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397073 4853 flags.go:64] FLAG: --enable-server="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397083 4853 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397095 4853 flags.go:64] FLAG: --event-burst="100" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397105 4853 flags.go:64] FLAG: --event-qps="50" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397115 4853 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397124 4853 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397134 4853 flags.go:64] FLAG: --eviction-hard="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397146 4853 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397155 4853 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397165 4853 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397175 4853 flags.go:64] FLAG: --eviction-soft="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397185 4853 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397194 4853 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397204 4853 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397213 4853 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397223 4853 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397232 4853 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397242 4853 flags.go:64] FLAG: --feature-gates="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397254 4853 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397264 4853 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397274 4853 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 16:56:13 crc 
kubenswrapper[4853]: I1209 16:56:13.397285 4853 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397295 4853 flags.go:64] FLAG: --healthz-port="10248" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397305 4853 flags.go:64] FLAG: --help="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397314 4853 flags.go:64] FLAG: --hostname-override="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397324 4853 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397334 4853 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397344 4853 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397353 4853 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397365 4853 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397374 4853 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397385 4853 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397395 4853 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397405 4853 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397414 4853 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397424 4853 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397433 4853 flags.go:64] FLAG: --kube-reserved="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397444 4853 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397453 4853 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397463 4853 flags.go:64] FLAG: --kubelet-cgroups="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397472 4853 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397481 4853 flags.go:64] FLAG: --lock-file="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397491 4853 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397501 4853 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397511 4853 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397525 4853 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397534 4853 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397544 4853 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397554 4853 flags.go:64] FLAG: --logging-format="text" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397563 4853 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397574 4853 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 16:56:13 crc 
kubenswrapper[4853]: I1209 16:56:13.397583 4853 flags.go:64] FLAG: --manifest-url="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397593 4853 flags.go:64] FLAG: --manifest-url-header="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397667 4853 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397677 4853 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397689 4853 flags.go:64] FLAG: --max-pods="110" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397699 4853 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397709 4853 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397718 4853 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397728 4853 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397738 4853 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397748 4853 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397758 4853 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397787 4853 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397797 4853 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397807 4853 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397817 4853 flags.go:64] FLAG: --pod-cidr="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397827 4853 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397841 4853 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397851 4853 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397861 4853 flags.go:64] FLAG: --pods-per-core="0" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397870 4853 flags.go:64] FLAG: --port="10250" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397880 4853 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397890 4853 flags.go:64] FLAG: --provider-id="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397900 4853 flags.go:64] FLAG: --qos-reserved="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397909 4853 flags.go:64] FLAG: --read-only-port="10255" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397919 4853 flags.go:64] FLAG: --register-node="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397928 4853 flags.go:64] FLAG: --register-schedulable="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397939 4853 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397955 4853 flags.go:64] FLAG: --registry-burst="10" Dec 09 16:56:13 crc 
kubenswrapper[4853]: I1209 16:56:13.397964 4853 flags.go:64] FLAG: --registry-qps="5" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397974 4853 flags.go:64] FLAG: --reserved-cpus="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397983 4853 flags.go:64] FLAG: --reserved-memory="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.397994 4853 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398005 4853 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398018 4853 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398028 4853 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398037 4853 flags.go:64] FLAG: --runonce="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398046 4853 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398057 4853 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398066 4853 flags.go:64] FLAG: --seccomp-default="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398076 4853 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398086 4853 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398095 4853 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398105 4853 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398115 4853 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398125 4853 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398134 4853 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398144 4853 flags.go:64] FLAG: --storage-driver-user="root" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398154 4853 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398164 4853 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398174 4853 flags.go:64] FLAG: --system-cgroups="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398183 4853 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398200 4853 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398209 4853 flags.go:64] FLAG: --tls-cert-file="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398219 4853 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398236 4853 flags.go:64] FLAG: --tls-min-version="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398245 4853 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398255 4853 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398264 4853 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 09 16:56:13 crc kubenswrapper[4853]: 
I1209 16:56:13.398275 4853 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398285 4853 flags.go:64] FLAG: --v="2" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398297 4853 flags.go:64] FLAG: --version="false" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398308 4853 flags.go:64] FLAG: --vmodule="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398320 4853 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.398331 4853 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398541 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398556 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398566 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398576 4853 feature_gate.go:330] unrecognized feature gate: Example Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398585 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398593 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398631 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398640 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398648 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398657 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398666 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398674 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398682 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398691 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398699 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398708 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398716 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398724 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398735 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
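
Every line tagged flags.go:64 above dumps one registered command-line flag with its effective value, and most of them show built-in defaults because, per the deprecation notices at the top of the boot, the real settings live in the file named by --config=/etc/kubernetes/kubelet.conf. The dump technique itself is simple: visit every registered flag after parsing. A sketch with the standard library flag package (the kubelet itself uses pflag, but the idea is the same):

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // A couple of illustrative flags; a real daemon registers dozens.
        flag.String("node-ip", "", "IP address of the node")
        flag.Int("max-pods", 110, "maximum number of pods")
        flag.Parse()

        // Print every registered flag, defaults included, in the same
        // FLAG: --name="value" shape seen in the log above.
        flag.VisitAll(func(f *flag.Flag) {
            fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
        })
    }

Running it prints a FLAG: --max-pods="110" style line for every flag whether or not it was set on the command line, which is why the dump above is so long.
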
Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398744 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398754 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398763 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398772 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398781 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398789 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398798 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398807 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398818 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398828 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398837 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398847 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398856 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398866 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398878 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398887 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398896 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398906 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398916 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398924 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398933 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398941 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398950 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398959 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398968 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398976 
4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398987 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.398999 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399008 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399016 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399026 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399035 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399043 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399052 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399060 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399068 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399076 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399084 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399093 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399102 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399111 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399119 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399127 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399136 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399145 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399153 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399166 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399178 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399188 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399197 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399206 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.399215 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.399239 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.412944 4853 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.412994 4853 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413151 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413177 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413187 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413197 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413207 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413217 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413228 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413237 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413246 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413255 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413264 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413274 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413284 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413295 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
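
Once the unrecognized names are dropped, feature_gate.go:386 prints only the surviving overrides as the effective map; any gate absent from that map still carries its compiled-in default. A toy illustration of that lookup order (the defaults and overrides here are invented for the example, not kubelet's tables):

    package main

    import "fmt"

    // Hypothetical compiled-in defaults.
    var defaults = map[string]bool{"NodeSwap": false, "KMSv1": false}

    // Recognized overrides that survived validation, as in the summary line.
    var overrides = map[string]bool{"KMSv1": true, "ValidatingAdmissionPolicy": true}

    // enabled answers "is this gate on?" by checking overrides first,
    // then falling back to the default.
    func enabled(name string) bool {
        if v, ok := overrides[name]; ok {
            return v
        }
        return defaults[name]
    }

    func main() {
        fmt.Println(enabled("KMSv1"), enabled("NodeSwap")) // true false
    }
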
Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413308 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413320 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413329 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413339 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413349 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413357 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413365 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413374 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413383 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413395 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413407 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413416 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413425 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413438 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
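
The server.go:493 "Golang settings" line above shows GOGC, GOMAXPROCS and GOTRACEBACK all empty, meaning the kubelet runs on Go runtime defaults (GC target of 100 percent, one scheduler slot per visible CPU). Any Go program can report the same view, as in this small sketch:

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    func main() {
        // What the kubelet logs: the raw environment variables, empty if unset.
        fmt.Printf("GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
            os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))
        // The effective values when the variables are unset:
        fmt.Println("NumCPU:", runtime.NumCPU())
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 0 queries without changing
    }
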
Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413449 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413458 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413467 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413477 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413486 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413495 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413504 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413512 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413521 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413530 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413538 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413547 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413555 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413563 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413571 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413580 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413588 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413638 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413650 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413661 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413672 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413683 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413693 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413704 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413714 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413723 4853 feature_gate.go:330] unrecognized feature gate: Example Dec 
09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413731 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413741 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413750 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413758 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413766 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413775 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413784 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413793 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413801 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413811 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413820 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413829 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413837 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413845 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413854 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413866 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.413875 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.413889 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414176 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414192 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414200 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414209 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414217 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414226 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414234 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414242 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414251 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414259 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414268 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414276 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414287 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414296 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414306 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414316 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414325 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414335 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414348 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414360 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414369 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414378 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414387 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414395 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414407 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414416 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414426 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414435 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414444 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414452 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414460 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414469 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414477 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414486 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414494 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414503 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414511 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414520 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414528 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414538 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414546 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414555 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414563 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414572 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414580 4853 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414588 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414635 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414644 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414653 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414661 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414669 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414678 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414687 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414695 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414706 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414717 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414726 4853 feature_gate.go:330] unrecognized feature gate: Example Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414736 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414745 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414754 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414762 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414770 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414778 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414787 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414795 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414804 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414812 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414820 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414828 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414837 4853 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.414845 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.414869 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.415338 4853 server.go:940] "Client rotation is on, will bootstrap in background" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.419909 4853 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.420045 4853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.420883 4853 server.go:997] "Starting client certificate rotation" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.420961 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.421149 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-09 02:25:48.190827825 +0000 UTC Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.421310 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.429133 4853 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.430665 4853 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.431769 4853 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.442636 4853 log.go:25] "Validated CRI v1 runtime API" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.466737 4853 log.go:25] "Validated CRI v1 image API" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.468208 4853 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.470709 4853 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-09-16-51-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.470739 4853 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} 
/dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.492205 4853 manager.go:217] Machine: {Timestamp:2025-12-09 16:56:13.488446459 +0000 UTC m=+0.423185731 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:66dfaf11-4892-4e38-8caa-0f87e61cbeaf BootID:5d669b96-627f-4105-ba3d-ff7569a6f697 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:dc:22:97 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:dc:22:97 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:75:0e:9c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8d:9c:ab Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:79:68:68 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:19:34:ea Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:af:1e:77:2a:f3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:9c:d1:77:48:f7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} 
{Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.492778 4853 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
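[Editor's aside] The run of feature_gate.go:330 warnings earlier in this stream is the kubelet rejecting gate names it was never compiled with: the node config appears to hand the kubelet the cluster's full FeatureGate list, most of which only OpenShift operators recognize, and this build tolerates the unknown names with a warning. A minimal Go sketch of the upstream k8s.io/component-base/featuregate behavior, with two gate names and defaults copied from the logged map; the registration set, and the fact that upstream SetFromMap returns an error where this kubelet merely warns, are illustrative assumptions about a downstream patch, not this binary's exact code:

    package main

    import (
        "fmt"

        "k8s.io/component-base/featuregate"
    )

    func main() {
        gates := featuregate.NewFeatureGate()
        // Register only the gates this binary knows about (defaults match the log).
        if err := gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
            "CloudDualStackNodeIPs": {Default: true, PreRelease: featuregate.GA},
            "NodeSwap":              {Default: false, PreRelease: featuregate.Beta},
        }); err != nil {
            panic(err)
        }
        // Apply a configured map, as the kubelet does from its config file.
        // An unregistered name is rejected here; the kubelet in this log only warns.
        err := gates.SetFromMap(map[string]bool{
            "NodeSwap":                       false,
            "PersistentIPsForVirtualization": true, // not registered above
        })
        fmt.Println(err)                                    // unrecognized feature gate: ...
        fmt.Println(gates.Enabled("CloudDualStackNodeIPs")) // true (GA default)
    }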
Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.492984 4853 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.493403 4853 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.493759 4853 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.493806 4853 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.494184 4853 topology_manager.go:138] "Creating topology manager with none policy" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.494203 4853 container_manager_linux.go:303] "Creating device plugin manager" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.494402 4853 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.494906 4853 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.495308 4853 state_mem.go:36] "Initialized new in-memory state store" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.495438 4853 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.496709 4853 kubelet.go:418] "Attempting to sync node with API server" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.496777 4853 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
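[Editor's aside] The container_manager_linux.go:272 entry above is the effective node resource config: 200m CPU / 350Mi memory / 350Mi ephemeral-storage held back in SystemReserved, plus hard-eviction thresholds of memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, and imagefs.inodesFree < 5% (Percentage:0.1 in the dump corresponds to "10%"). A minimal sketch, assuming the k8s.io/kubelet/config/v1beta1 API and sigs.k8s.io/yaml, of a KubeletConfiguration expressing the same reservations and thresholds; an illustration, not the file this node actually runs with:

    package main

    import (
        "fmt"

        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            // SystemReserved values copied from the logged nodeConfig.
            SystemReserved: map[string]string{
                "cpu":               "200m",
                "memory":            "350Mi",
                "ephemeral-storage": "350Mi",
            },
            // HardEvictionThresholds from the log, in config-file notation.
            EvictionHard: map[string]string{
                "memory.available":   "100Mi",
                "nodefs.available":   "10%",
                "nodefs.inodesFree":  "5%",
                "imagefs.available":  "15%",
                "imagefs.inodesFree": "5%",
            },
        }
        cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
        cfg.Kind = "KubeletConfiguration"
        out, err := yaml.Marshal(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }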
Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.496859 4853 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.496881 4853 kubelet.go:324] "Adding apiserver pod source" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.496898 4853 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.498318 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.498360 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.498445 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.498484 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.498845 4853 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.499385 4853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
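[Editor's aside] The reflector.go:561/158 failures above are the kubelet's node and service informers trying to list from api-int.crc.testing:6443 before the (static-pod) API server is up; client-go reflectors log the error, back off, and retry until the endpoint answers, which is why these entries are noisy but harmless during boot. A minimal client-go sketch of the same list/watch-with-retry pattern; the kubeconfig path and the 10-second give-up window are assumptions for illustration:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path for this sketch.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        factory := informers.NewSharedInformerFactory(client, 0)
        nodeInformer := factory.Core().V1().Nodes().Informer()

        stop := make(chan struct{})
        go func() {
            time.Sleep(10 * time.Second) // bound the wait; reflectors would otherwise retry forever
            close(stop)
        }()
        factory.Start(stop) // list/watch failures are logged and retried, as in the entries above
        if cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
            fmt.Println("node informer synced")
        } else {
            fmt.Println("gave up: API server unreachable, reflector kept logging errors")
        }
    }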
Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.500867 4853 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501709 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501747 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501777 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501791 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501813 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501826 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501839 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501860 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501875 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501889 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501906 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.501922 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.502338 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.502995 4853 server.go:1280] "Started kubelet" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.503235 4853 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.503300 4853 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.503907 4853 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.503954 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:13 crc systemd[1]: Started Kubernetes Kubelet. 
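[Editor's aside] The certificate_manager.go:356 entries in this stream (expiration vs. rotation deadline, and the "Waiting 252h9m..." line just below) come from client-go's certificate manager, which, as I read k8s.io/client-go/util/certificate, schedules rotation at a jittered point 70-90% of the way through the certificate's notBefore-to-notAfter span; a deadline already in the past (as with the client cert earlier, deadline 02:25:48 vs. a 16:56 boot) triggers immediate rotation. A self-contained sketch of that rule; the notBefore value is a hypothetical issue time, only the expiration is taken from the log:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mirrors the jitter rule: notBefore + lifetime*(0.7 + 0.2*rand).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 11, 24, 5, 53, 3, 0, time.UTC) // hypothetical issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiration from the log
        d := rotationDeadline(notBefore, notAfter)
        fmt.Printf("rotate at %s (%s before expiry)\n", d, notAfter.Sub(d))
    }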
Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.505149 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.505167 4853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.505252 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f9a6d5fdd0124 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 16:56:13.502955812 +0000 UTC m=+0.437695024,LastTimestamp:2025-12-09 16:56:13.502955812 +0000 UTC m=+0.437695024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.505853 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:05:14.484934554 +0000 UTC Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.505908 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 252h9m0.979030396s for next certificate rotation Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.506347 4853 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.506367 4853 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.506748 4853 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.507533 4853 server.go:460] "Adding debug handlers to kubelet server" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.507097 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.508046 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.508121 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.508283 4853 factory.go:55] Registering systemd factory Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.508314 4853 factory.go:221] Registration of the systemd container factory successfully Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.508587 4853 factory.go:153] Registering CRI-O factory Dec 09 16:56:13 crc 
kubenswrapper[4853]: I1209 16:56:13.508628 4853 factory.go:221] Registration of the crio container factory successfully Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.508702 4853 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.508767 4853 factory.go:103] Registering Raw factory Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.508790 4853 manager.go:1196] Started watching for new ooms in manager Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.506529 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.509661 4853 manager.go:319] Starting recovery of all containers Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520584 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520655 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520671 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520684 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520726 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520739 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520752 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520764 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 09 16:56:13 crc 
kubenswrapper[4853]: I1209 16:56:13.520776 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520788 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520800 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520812 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520824 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520838 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520852 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520864 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520922 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520936 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520948 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520960 4853 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520969 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520977 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520986 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.520994 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521007 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521017 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521052 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521063 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521072 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521081 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521090 4853 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521121 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521136 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521150 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521161 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521173 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521184 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521223 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521236 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521247 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521260 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521270 4853 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521280 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521290 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521299 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521309 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521319 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521352 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521362 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521372 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521381 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521391 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521405 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521416 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521425 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521435 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.521448 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522106 4853 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522133 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522148 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522162 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522174 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522186 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522198 4853 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522213 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522229 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522241 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522255 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522281 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522305 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522330 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522355 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522378 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522404 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522428 4853 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522452 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522475 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522500 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522523 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522546 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522575 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522638 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522666 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522690 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522716 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522742 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522767 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522792 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522820 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522846 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522871 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522897 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522923 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522949 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.522975 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523000 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523031 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523054 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523079 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523104 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523129 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523161 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523186 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523210 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523236 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523275 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523303 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523330 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523358 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523386 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523426 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523453 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523483 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523507 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523531 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523560 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523586 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523647 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523673 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523697 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523720 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523750 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523776 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523805 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523831 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523858 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523882 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523907 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523936 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523961 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.523985 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524009 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524032 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524057 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524082 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524108 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524133 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524160 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524183 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524207 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524230 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524253 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524279 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524305 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524333 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524357 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524380 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524405 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524426 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524451 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524478 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524502 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524526 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524553 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524579 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524639 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524667 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524728 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524753 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524777 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524844 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524872 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524897 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524924 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524953 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.524979 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525005 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525030 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525054 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525080 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525106 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525130 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525159 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525184 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525210 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525233 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525256 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525274 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525308 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525325 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525341 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525359 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525375 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525392 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525408 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525427 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525443 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525461 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525478 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525496 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525514 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525531 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525551 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525568 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525587 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525695 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525715 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525732 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525748 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525767 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525785 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525810 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525827 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525844 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525866 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525884 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525900 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525920 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525937 4853 reconstruct.go:97] "Volume reconstruction finished" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.525949 4853 reconciler.go:26] "Reconciler: start to sync state" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.538925 4853 manager.go:324] Recovery completed Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.558432 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.561962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.562022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.562037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.563766 4853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.565187 4853 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.565214 4853 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.565249 4853 state_mem.go:36] "Initialized new in-memory state store" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.565756 4853 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.565804 4853 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.565890 4853 kubelet.go:2335] "Starting kubelet main sync loop" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.565946 4853 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 16:56:13 crc kubenswrapper[4853]: W1209 16:56:13.567092 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.567170 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.575179 4853 policy_none.go:49] "None policy: Start" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.575989 4853 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.576027 4853 state_mem.go:35] "Initializing new in-memory state store" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.609613 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.630781 4853 manager.go:334] "Starting Device Plugin manager" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.630861 4853 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.630873 4853 server.go:79] "Starting device plugin registration server" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.631334 4853 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.631351 4853 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.631578 4853 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.631654 4853 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.631660 4853 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.638560 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.666919 4853 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 16:56:13 crc kubenswrapper[4853]: 
I1209 16:56:13.667019 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.668690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.668769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.668795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.669016 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.669270 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.669364 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670653 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.670930 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.671052 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.671264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.671294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.671544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.671714 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.671831 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.671872 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672149 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672419 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672582 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672627 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672673 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.672957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673064 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673088 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673609 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.673620 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.708821 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729278 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729332 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729360 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729376 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729534 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729557 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729652 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729666 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729705 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729721 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729735 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729816 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.729848 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.732102 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.733730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.733799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.733813 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.733882 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.734360 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831644 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831760 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831793 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831884 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831929 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831945 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.831994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832000 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832089 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832131 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832214 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832236 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832243 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832282 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832318 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832333 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832378 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832381 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832456 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832499 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832407 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832553 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.832626 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.934582 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.936054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.936117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.936141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:13 crc kubenswrapper[4853]: I1209 16:56:13.936215 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 16:56:13 crc kubenswrapper[4853]: E1209 16:56:13.936752 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.009714 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.034745 4853 util.go:30] "No sandbox for pod can be found. 
Dec 09 16:56:14 crc kubenswrapper[4853]: W1209 16:56:14.057085 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9e0d3d7f96372a00fc483dd60bcd744cee324a5dfcc8973367a19f00ad5f8113 WatchSource:0}: Error finding container 9e0d3d7f96372a00fc483dd60bcd744cee324a5dfcc8973367a19f00ad5f8113: Status 404 returned error can't find the container with id 9e0d3d7f96372a00fc483dd60bcd744cee324a5dfcc8973367a19f00ad5f8113 Dec 09 16:56:14 crc kubenswrapper[4853]: W1209 16:56:14.063666 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0ba2c4349e880636a16efe6ea2428f0ba82651cc96ea0a35c23ee73037d299cc WatchSource:0}: Error finding container 0ba2c4349e880636a16efe6ea2428f0ba82651cc96ea0a35c23ee73037d299cc Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.067006 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.075655 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.079702 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:14 crc kubenswrapper[4853]: W1209 16:56:14.096709 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e3d960981cd4d310e1062634d24ca96563cc889e93f60add460d1e09a953e7d1 WatchSource:0}: Error finding container e3d960981cd4d310e1062634d24ca96563cc889e93f60add460d1e09a953e7d1 Dec 09 16:56:14 crc kubenswrapper[4853]: E1209 16:56:14.110331 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Dec 09 16:56:14 crc kubenswrapper[4853]: W1209 16:56:14.110824 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6c25f0f25ee7ef49c92090704b84d2d7b1638d230ad2facd54563566c12a4f47 WatchSource:0}: Error finding container 6c25f0f25ee7ef49c92090704b84d2d7b1638d230ad2facd54563566c12a4f47 Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.336963 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.339330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.339383 4853 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.339400 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.339431 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 16:56:14 crc kubenswrapper[4853]: E1209 16:56:14.339948 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Dec 09 16:56:14 crc kubenswrapper[4853]: W1209 16:56:14.372632 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:14 crc kubenswrapper[4853]: E1209 16:56:14.372712 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.505179 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.573294 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89" exitCode=0 Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.573361 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.573438 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3d960981cd4d310e1062634d24ca96563cc889e93f60add460d1e09a953e7d1"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.573521 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.574899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.574935 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.574944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.576078 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071" exitCode=0 Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.576434 4853 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.576160 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.576932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0ba2c4349e880636a16efe6ea2428f0ba82651cc96ea0a35c23ee73037d299cc"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.577050 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.577699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.577724 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.577732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.578661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.578694 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.578704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.579449 4853 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b" exitCode=0 Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.579480 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.579534 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9e0d3d7f96372a00fc483dd60bcd744cee324a5dfcc8973367a19f00ad5f8113"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.579674 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.580969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.581001 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.581013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.581365 4853 
generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba" exitCode=0 Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.581435 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.581454 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c25f0f25ee7ef49c92090704b84d2d7b1638d230ad2facd54563566c12a4f47"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.581571 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.582560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.582652 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.582673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.583014 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c"} Dec 09 16:56:14 crc kubenswrapper[4853]: I1209 16:56:14.583044 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7a7a3bc4d7df5d7987806e6f74eba109f8a7771f2c4c5a77b9b3534c8c54f421"} Dec 09 16:56:14 crc kubenswrapper[4853]: W1209 16:56:14.597633 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:14 crc kubenswrapper[4853]: E1209 16:56:14.597716 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:14 crc kubenswrapper[4853]: E1209 16:56:14.911654 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Dec 09 16:56:14 crc kubenswrapper[4853]: W1209 16:56:14.944566 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: 
Dec 09 16:56:14 crc kubenswrapper[4853]: E1209 16:56:14.944685 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:15 crc kubenswrapper[4853]: W1209 16:56:15.074040 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:15 crc kubenswrapper[4853]: E1209 16:56:15.074169 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.140854 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.142812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.142851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.142863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.142885 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 16:56:15 crc kubenswrapper[4853]: E1209 16:56:15.143318 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.504877 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.510042 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 16:56:15 crc kubenswrapper[4853]: E1209 16:56:15.510888 4853 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.587260 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.587307
4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.587321 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.587404 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.588121 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.588149 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.588161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.589646 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.589677 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.589695 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.589722 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.590339 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.590379 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.590388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.591956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.591983 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.592012 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.592022 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.593625 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4" exitCode=0 Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.593677 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.593719 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.594378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.594407 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.594417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.595756 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f"} Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.595810 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.596439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.596468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:15 crc kubenswrapper[4853]: I1209 16:56:15.596478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.601654 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641" exitCode=0 Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.601706 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641"} Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.601838 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.602913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.602947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.602959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.608727 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd"} Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.608800 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.608832 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.610140 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.610174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.610184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.611492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.611533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.611548 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.743392 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.744622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.744662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.744675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:16 crc kubenswrapper[4853]: I1209 16:56:16.744698 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.616580 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337"} Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.616630 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720"} Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.616643 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9"} Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.616652 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb"} Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.616646 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.616701 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.617778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.617836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:17 crc kubenswrapper[4853]: I1209 16:56:17.617854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.622781 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811"} Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.623182 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.623890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.623907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.623916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.840888 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.841093 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.842695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.842746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 
16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.842761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:18 crc kubenswrapper[4853]: I1209 16:56:18.865465 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.614329 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.614529 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.616195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.616257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.616279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.625538 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.625630 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.627043 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.627096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.627110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.627135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.627148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.627160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.758135 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.796304 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:19 crc kubenswrapper[4853]: I1209 16:56:19.805254 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.628232 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.629462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:20 crc kubenswrapper[4853]: 
I1209 16:56:20.629515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.629536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.710543 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.710761 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.711857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.711901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:20 crc kubenswrapper[4853]: I1209 16:56:20.711919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.081987 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.082259 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.083817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.083864 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.083877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.332653 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.630834 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.630873 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.635370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.635430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.635460 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.635658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.635738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.635753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
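event="NodeHasSufficientPID"

The probe failures that follow show the kubelet's HTTP probe semantics: one GET with a hard deadline, where a transport error, a timeout, or a status code outside 200-399 counts as a failure. That is why the cluster-policy-controller startup probe below reports "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", and why kube-apiserver answering 403 on /livez to the anonymous probe a few seconds later is likewise logged as "HTTP probe failed with statuscode: 403". A sketch of that behavior; the URL and the 200-399 success range match the log and upstream probe semantics, while the timeout value and the TLS shortcut are placeholders:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout, // plays the role of the probe's timeoutSeconds
		Transport: &http.Transport{
			// Stand-in for the kubelet's probe TLS handling; do not use
			// InsecureSkipVerify like this outside a local experiment.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Timeouts surface as "context deadline exceeded (Client.Timeout
		// exceeded while awaiting headers)", the exact text in the log.
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// URL taken from the cluster-policy-controller startup probe below.
	if err := probe("https://192.168.126.11:10357/healthz", 5*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
		return
	}
	fmt.Println("Probe succeeded")
}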
event="NodeHasSufficientPID" Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.865951 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 16:56:21 crc kubenswrapper[4853]: I1209 16:56:21.866034 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 16:56:23 crc kubenswrapper[4853]: I1209 16:56:23.623059 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 16:56:23 crc kubenswrapper[4853]: I1209 16:56:23.623315 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:23 crc kubenswrapper[4853]: I1209 16:56:23.624723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:23 crc kubenswrapper[4853]: I1209 16:56:23.624760 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:23 crc kubenswrapper[4853]: I1209 16:56:23.624770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:23 crc kubenswrapper[4853]: E1209 16:56:23.638787 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.207047 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.207249 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.208575 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.208647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.208659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.213306 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.432665 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.432859 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.434068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 
16:56:25.434129 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.434148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.640487 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.641359 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.641406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:25 crc kubenswrapper[4853]: I1209 16:56:25.641418 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:26 crc kubenswrapper[4853]: I1209 16:56:26.098216 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 16:56:26 crc kubenswrapper[4853]: I1209 16:56:26.098287 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 16:56:26 crc kubenswrapper[4853]: I1209 16:56:26.104952 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 16:56:26 crc kubenswrapper[4853]: I1209 16:56:26.105036 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 16:56:27 crc kubenswrapper[4853]: I1209 16:56:27.249997 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 16:56:27 crc kubenswrapper[4853]: I1209 16:56:27.250092 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 16:56:29 crc kubenswrapper[4853]: I1209 16:56:29.615186 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 16:56:29 crc kubenswrapper[4853]: I1209 16:56:29.615249 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.086126 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.086343 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.086644 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.086687 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.087752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.087812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.087830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.089783 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.098994 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.101436 4853 trace.go:236] Trace[1905318245]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 16:56:17.548) (total time: 13552ms): Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[1905318245]: ---"Objects listed" error: 13552ms (16:56:31.101) Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[1905318245]: [13.552784829s] [13.552784829s] END Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.101483 4853 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.101496 4853 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.101439 4853 trace.go:236] Trace[1430539308]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 16:56:16.615) 
Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[1430539308]: ---"Objects listed" error: 14485ms (16:56:31.101) Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[1430539308]: [14.485398873s] [14.485398873s] END Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.101638 4853 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.101467 4853 trace.go:236] Trace[1217760822]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 16:56:16.844) (total time: 14256ms): Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[1217760822]: ---"Objects listed" error: 14256ms (16:56:31.101) Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[1217760822]: [14.256544992s] [14.256544992s] END Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.101746 4853 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.103589 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.103887 4853 trace.go:236] Trace[2085707888]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 16:56:17.226) (total time: 13877ms): Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[2085707888]: ---"Objects listed" error: 13875ms (16:56:31.101) Dec 09 16:56:31 crc kubenswrapper[4853]: Trace[2085707888]: [13.877642651s] [13.877642651s] END Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.103938 4853 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.126064 4853 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.507992 4853 apiserver.go:52] "Watching apiserver" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.511053 4853 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.511399 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.511812 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.511889 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.511986 4853 util.go:30] "No sandbox for pod can be found.
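Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"

With the API server finally reachable, the first API-sourced pods arrive ("SyncLoop ADD"), and the ones that need pod networking immediately fail to sync: the runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet, and the network operator pods that will install one are themselves among the pods still being started. These "Error syncing pod, skipping" entries repeat until the CNI config appears. A sketch of the underlying readiness condition, checking the confdir for a config file; the path comes from the log message, and the extension list follows libcni's defaults (.conf, .conflist, .json):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // CNI confdir named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI confdir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
	}
}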
Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.512166 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.512205 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.512220 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.512227 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.512274 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.512308 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.513422 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.513867 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.514138 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.514991 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.515687 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.516516 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.516959 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.517113 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.517294 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.568580 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.581402 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.592480 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.602288 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.607504 4853 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.612178 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.620885 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.630850 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.657713 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.660082 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd" exitCode=255 Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.660200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd"} Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.669534 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.669842 4853 scope.go:117] "RemoveContainer" containerID="2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.672864 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.681737 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.693215 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.702568 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704280 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704334 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704366 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704400 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704432 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704463 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704492 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704522 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704557 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704586 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704733 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704766 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704798 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704859 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704896 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.704961 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.705536 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.705895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706182 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706189 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706184 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706334 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706430 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706748 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706789 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706804 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706831 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706864 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706895 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706894 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706923 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707020 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.706945 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707030 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707094 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707105 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707657 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707708 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707853 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707884 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.707963 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708000 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708044 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708163 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708297 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708347 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708387 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708427 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708688 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708716 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708747 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708967 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.709212 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.708965 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.709345 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.710952 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.710962 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711155 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711147 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711147 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711483 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711540 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711722 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711743 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711789 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711846 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.711975 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712001 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712100 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712234 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712285 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712321 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712283 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712273 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713246 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.712526 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713002 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713135 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713658 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713728 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713736 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713772 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713804 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713874 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713895 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713938 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713959 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713977 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.713993 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714029 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714053 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714104 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714185 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714204 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714271 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714441 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714483 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714503 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714359 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714616 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714822 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715015 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715187 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715252 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714803 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.714716 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715721 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715774 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715825 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715868 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715902 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715940 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716016 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716054 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716112 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716155 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.715490 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716606 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716689 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716723 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716843 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.717004 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.717450 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.717681 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.717920 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.717947 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718073 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718164 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.717997 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718576 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718700 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718788 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718836 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718865 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.718869 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.716786 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.719423 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.719468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.719474 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.719557 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721345 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721415 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721516 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721541 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721621 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721668 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721710 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721822 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.721890 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722075 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722256 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722320 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722351 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722376 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722476 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722513 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722551 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc 
kubenswrapper[4853]: I1209 16:56:31.722619 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722638 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722656 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722690 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722710 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722733 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722754 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722776 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.722799 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:56:32.222772325 +0000 UTC m=+19.157511547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722841 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722881 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722915 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722947 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.722976 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723005 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723035 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723040 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723066 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723098 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723128 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723160 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723190 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723221 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723252 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723253 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723292 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723310 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723327 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723344 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723362 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723386 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723402 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723421 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723445 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723467 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723491 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723532 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723560 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723580 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723615 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723633 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723656 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723699 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723722 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723744 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723762 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723780 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723823 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723844 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723860 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723878 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723894 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.724355 4853 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.723912 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.725115 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.725497 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.725566 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.725671 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.725857 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.725925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726208 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726232 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726249 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726409 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726428 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726847 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726943 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.726951 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727018 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727019 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727105 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727209 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727233 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727280 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727296 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727350 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727412 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727438 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727461 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727490 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727514 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727536 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727558 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727581 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727586 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727622 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727646 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727670 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727700 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727722 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727745 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727767 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727792 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727818 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727867 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727867 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727891 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727915 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727903 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727945 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727952 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.727989 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728121 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728179 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728212 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728236 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728261 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728282 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728308 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728353 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728377 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728400 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728507 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728566 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728663 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728634 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728827 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728873 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728909 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728978 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.728993 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729043 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729059 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729079 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729098 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729117 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729172 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729204 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729236 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729269 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729306 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729338 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729382 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729413 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729439 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729535 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729476 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729851 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729931 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729928 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730081 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730120 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730126 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730158 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730268 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730386 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730462 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730514 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730676 4853 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730693 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730731 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730730 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730746 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730796 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.730830 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731031 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731060 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731080 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731086 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731150 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731452 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731834 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731972 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.732421 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.729521 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.733276 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.733381 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.733566 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.733663 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.733811 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734138 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734170 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734177 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734260 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734555 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.734774 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734727 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734864 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.734937 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.735119 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.735154 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.731100 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.735373 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:32.235353803 +0000 UTC m=+19.170092995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.735412 4853 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.735757 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.735937 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.735890 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.736185 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.736495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.736686 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.736736 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.736775 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.736832 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.736895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.737344 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.737683 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.738176 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.738563 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.738746 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.738933 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.739087 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.739076 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.739232 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.739270 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.739642 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.739969 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.740035 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.740097 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.740164 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:32.240145755 +0000 UTC m=+19.174885037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.740216 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.740384 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.740520 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.740726 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.740781 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.739974 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.740904 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741404 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741431 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741451 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741470 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741488 4853 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741505 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741521 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741545 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741562 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741579 4853 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741615 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: 
I1209 16:56:31.741632 4853 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741650 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741667 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741682 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741700 4853 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741716 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741732 4853 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741749 4853 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741764 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741780 4853 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741802 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741818 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741841 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741856 4853 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741871 4853 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741886 4853 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741901 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741916 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741930 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741946 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741961 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741978 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.741994 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742009 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742024 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742036 4853 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742049 4853 reconciler_common.go:293] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742061 4853 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742074 4853 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742086 4853 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742098 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742109 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742120 4853 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742157 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742172 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742185 4853 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742196 4853 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742214 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742313 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742325 4853 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742366 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742379 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742391 4853 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742404 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742416 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742428 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742440 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742459 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742477 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742488 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742502 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742514 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 
16:56:31.742524 4853 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742534 4853 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742545 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742556 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742566 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742576 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742586 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742648 4853 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742659 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742668 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742683 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742693 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742703 4853 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 
16:56:31.742713 4853 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742723 4853 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742741 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742750 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742760 4853 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742769 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742779 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742791 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742804 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742814 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742823 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742838 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742855 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742866 4853 reconciler_common.go:293] "Volume detached 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742875 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742885 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742895 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742906 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742916 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742941 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742952 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742962 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.742976 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.743741 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.743871 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.744356 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.746582 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.746972 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.750986 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751181 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751203 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751214 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751257 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:32.251244262 +0000 UTC m=+19.185983444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751296 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751305 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751312 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:31 crc kubenswrapper[4853]: E1209 16:56:31.751335 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:32.251328755 +0000 UTC m=+19.186067927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.755079 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.755130 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.755727 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.755943 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.756101 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.757267 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.757757 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.759681 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.759942 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.759931 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760137 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760387 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760391 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760451 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760486 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760662 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760886 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760943 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.760994 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.761239 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.761317 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.761332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.763555 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.764812 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.764925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.764962 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.765214 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.766925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.767216 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.767717 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.770571 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.779255 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.783398 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.785573 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844466 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844569 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844657 4853 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844671 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844712 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844737 4853 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844752 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node 
\"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844805 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844823 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844838 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844854 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844910 4853 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844925 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844940 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.844998 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845094 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845118 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845194 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845211 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845269 4853 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" 
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845299 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845316 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845371 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845389 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845476 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845503 4853 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845540 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845554 4853 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845568 4853 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845668 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845689 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845704 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845718 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845765 4853 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845785 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845803 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845847 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845865 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845880 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845918 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845934 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845958 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846001 4853 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846015 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846031 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846041 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846052 4853 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846100 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846112 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846123 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846160 4853 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846171 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846182 4853 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846193 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846203 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846246 4853 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846259 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846270 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846282 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846298 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846341 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846357 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846426 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846453 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846495 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846512 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846530 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846547 4853 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846590 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846631 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846643 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846660 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846676 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846718 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846734 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846748 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846762 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846817 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846827 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846839 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846849 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846859 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846916 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846940 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.846954 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.847005 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.845389 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.847184 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.848827 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 16:56:31 crc kubenswrapper[4853]: W1209 16:56:31.862739 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-554fdee74084423b769e3ec9cb40c10caca87e4cc62e45504a94cda654d1872e WatchSource:0}: Error finding container 554fdee74084423b769e3ec9cb40c10caca87e4cc62e45504a94cda654d1872e: Status 404 returned error can't find the container with id 554fdee74084423b769e3ec9cb40c10caca87e4cc62e45504a94cda654d1872e
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.866638 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 16:56:31 crc kubenswrapper[4853]: I1209 16:56:31.866786 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.124886 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.138329 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 16:56:32 crc kubenswrapper[4853]: W1209 16:56:32.148254 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-54a7cfc992e6f792959d8b0168182e090b202650ed4190eef36d3b8b13b1576d WatchSource:0}: Error finding container 54a7cfc992e6f792959d8b0168182e090b202650ed4190eef36d3b8b13b1576d: Status 404 returned error can't find the container with id 54a7cfc992e6f792959d8b0168182e090b202650ed4190eef36d3b8b13b1576d
Dec 09 16:56:32 crc kubenswrapper[4853]: W1209 16:56:32.186295 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-39901c9290d55f25a00a0c40c73105a11c7b79a6e7ca1df9ef60482a33331dc0 WatchSource:0}: Error finding container 39901c9290d55f25a00a0c40c73105a11c7b79a6e7ca1df9ef60482a33331dc0: Status 404 returned error can't find the container with id 39901c9290d55f25a00a0c40c73105a11c7b79a6e7ca1df9ef60482a33331dc0
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.250249 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.250329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.250361 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.250423 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:56:33.250388852 +0000 UTC m=+20.185128034 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.250447 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.250510 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:33.250494055 +0000 UTC m=+20.185233287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.250568 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.250744 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:33.250721872 +0000 UTC m=+20.185461114 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.351079 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.351123 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351233 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351248 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351258 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351277 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351310 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:33.351296704 +0000 UTC m=+20.286035886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351318 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351340 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 16:56:32 crc kubenswrapper[4853]: E1209 16:56:32.351420 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:33.351392907 +0000 UTC m=+20.286132129 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.664176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"39901c9290d55f25a00a0c40c73105a11c7b79a6e7ca1df9ef60482a33331dc0"}
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.665772 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d"}
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.665860 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"54a7cfc992e6f792959d8b0168182e090b202650ed4190eef36d3b8b13b1576d"}
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.667509 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef"}
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.667536 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c"}
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.667550 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"554fdee74084423b769e3ec9cb40c10caca87e4cc62e45504a94cda654d1872e"}
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.669217 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.670734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300"}
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.671007 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.694942 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.711679 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.722814 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.733616 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.745071 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z"
Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.756935 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09
T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.767204 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.783693 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.800389 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.815741 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.839845 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.854103 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.868580 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:32 crc kubenswrapper[4853]: I1209 16:56:32.885210 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:32Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.260474 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.260648 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.260765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.260796 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:56:35.260757976 +0000 UTC m=+22.195497198 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.260924 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.261042 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:35.261018343 +0000 UTC m=+22.195757565 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.261048 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.261110 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:35.261096856 +0000 UTC m=+22.195836068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.361463 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.361540 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.361764 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.361791 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.361788 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.361855 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.361876 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:33 crc kubenswrapper[4853]: 
E1209 16:56:33.361809 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.361970 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:35.361937746 +0000 UTC m=+22.296676968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.362030 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:35.362006027 +0000 UTC m=+22.296745239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.566744 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.566773 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.566890 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.567038 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.567132 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:33 crc kubenswrapper[4853]: E1209 16:56:33.567225 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.571203 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.572051 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.573805 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.574740 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.576343 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.577104 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.577921 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.579189 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.580071 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.581516 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.582402 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.584485 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.585429 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.586472 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.587457 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.588432 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.591000 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.591881 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.593799 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.594896 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.595713 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.597564 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.598407 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.601534 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.602241 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.604102 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.604825 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.605300 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.605273 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.605831 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.606290 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.606741 4853 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.606838 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.608068 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.608555 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.610740 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.612478 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.613289 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.614256 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.614989 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.616113 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.616546 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.617581 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.618183 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.619087 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.619511 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.620410 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.620893 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.621934 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.622360 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.623130 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.623558 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.624539 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.625084 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.625512 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.627276 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.641128 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.653396 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.656347 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.668453 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.668873 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.671245 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.681047 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.692797 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.702538 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.713844 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.734519 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.750130 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.761724 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.773726 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.783894 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:33 crc kubenswrapper[4853]: I1209 16:56:33.795107 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.303824 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.305782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.305828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.305847 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.305930 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.319912 4853 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.320146 4853 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.321314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.321389 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.321410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.321436 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.321455 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: E1209 16:56:34.352178 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:34Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.357366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.357419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.357432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.357450 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.357463 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: E1209 16:56:34.368857 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:34Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.372809 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.372848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.372860 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.372878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.372890 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: E1209 16:56:34.384886 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:34Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.388253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.388321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.388332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.388345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.388354 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: E1209 16:56:34.402516 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:34Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.406505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.406543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.406555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.406570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.406581 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: E1209 16:56:34.419909 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:34Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:34 crc kubenswrapper[4853]: E1209 16:56:34.420112 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.421724 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.421755 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.421768 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.421786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.421798 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.524132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.524190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.524201 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.524218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.524229 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.627444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.627544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.627585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.627673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.627699 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.730463 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.730501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.730512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.730527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.730539 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.832919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.832957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.832967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.832981 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.832992 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.935239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.935287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.935305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.935328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:34 crc kubenswrapper[4853]: I1209 16:56:34.935345 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:34Z","lastTransitionTime":"2025-12-09T16:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.037994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.038049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.038061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.038082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.038094 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.141348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.141415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.141429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.141452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.141475 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.244585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.244711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.244732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.244756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.244771 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.277133 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.277305 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.277382 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.277446 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:56:39.277407932 +0000 UTC m=+26.212147174 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.277514 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.277646 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.277660 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:39.277575096 +0000 UTC m=+26.212314328 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.277869 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:39.277763501 +0000 UTC m=+26.212502683 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.348242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.348302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.348315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.348336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.348352 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.378052 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.378122 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378331 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378356 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378369 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378370 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378418 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378433 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378443 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:39.378424076 +0000 UTC m=+26.313163258 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.378508 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:39.378483388 +0000 UTC m=+26.313222580 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.450674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.450733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.450748 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.450772 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.450789 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.553014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.553085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.553108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.553141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.553164 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.566471 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.566471 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.566556 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.566773 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.566897 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:35 crc kubenswrapper[4853]: E1209 16:56:35.567051 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.656159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.656230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.656250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.656278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.656306 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.680495 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.696640 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.724135 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.742689 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.756886 4853 csr.go:261] certificate signing request csr-mlbhk is approved, waiting to be issued Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.758452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.758509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.758525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.758546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.758571 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.759530 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.787399 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.789709 4853 csr.go:257] certificate signing request csr-mlbhk is issued Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.817042 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.844878 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.860499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.860560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.860574 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.860606 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.860620 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.881881 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:35Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.963014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.963546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.963803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.964008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:35 crc kubenswrapper[4853]: I1209 16:56:35.964266 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:35Z","lastTransitionTime":"2025-12-09T16:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.066262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.066518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.066652 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.066755 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.066862 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.169469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.169505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.169513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.169527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.169536 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.271840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.272344 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.272370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.272406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.272431 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.375282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.375330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.375343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.375360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.375372 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.477898 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.477940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.477953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.477969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.477980 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.580588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.580634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.580643 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.580655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.580665 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.649306 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-svpfq"] Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.649611 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.651877 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.651911 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.651980 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kwsj4"] Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.652632 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.652666 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fmrzg"] Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.652771 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.653151 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qzngg"] Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.653318 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.654128 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.654714 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655481 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655487 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655517 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655564 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655645 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655667 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655784 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.655787 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.656198 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.656285 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.656451 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.667285 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.677777 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.682100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.682138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.682147 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.682158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.682167 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.691477 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.701863 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.712120 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.723692 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.733771 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.754583 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.772073 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.785202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.785251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.785262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.785278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.785308 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.787296 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdq5\" (UniqueName: \"kubernetes.io/projected/8b02f072-d8cc-4c46-8159-fe99d19b24a6-kube-api-access-pjdq5\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.787345 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-system-cni-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.787423 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788213 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1e036ba1-c8bd-48d7-bd93-71993300b60f-rootfs\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788252 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e036ba1-c8bd-48d7-bd93-71993300b60f-mcd-auth-proxy-config\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788274 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpw4\" (UniqueName: \"kubernetes.io/projected/1e036ba1-c8bd-48d7-bd93-71993300b60f-kube-api-access-xfpw4\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788294 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b02f072-d8cc-4c46-8159-fe99d19b24a6-cni-binary-copy\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-cni-multus\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788384 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-kubelet\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788532 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/31c5f775-f793-4d43-9503-0070fc5ba186-hosts-file\") pod \"node-resolver-svpfq\" (UID: \"31c5f775-f793-4d43-9503-0070fc5ba186\") " pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788663 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-conf-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788733 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " 
pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788772 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-os-release\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-hostroot\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788836 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-cnibin\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788869 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e036ba1-c8bd-48d7-bd93-71993300b60f-proxy-tls\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788898 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-system-cni-dir\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788927 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-k8s-cni-cncf-io\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.788973 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5faabd8c-2204-4f29-9961-392416e98677-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789053 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-cni-bin\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789098 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-daemon-config\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789138 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8dg\" (UniqueName: \"kubernetes.io/projected/5faabd8c-2204-4f29-9961-392416e98677-kube-api-access-jm8dg\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789163 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pjk\" (UniqueName: \"kubernetes.io/projected/31c5f775-f793-4d43-9503-0070fc5ba186-kube-api-access-92pjk\") pod \"node-resolver-svpfq\" (UID: \"31c5f775-f793-4d43-9503-0070fc5ba186\") " pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789185 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-socket-dir-parent\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789208 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-netns\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789245 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5faabd8c-2204-4f29-9961-392416e98677-cni-binary-copy\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789268 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-multus-certs\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789283 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-etc-kubernetes\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789316 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-cnibin\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789330 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-os-release\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.789344 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-cni-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.790970 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-09 16:51:35 +0000 UTC, rotation deadline is 2026-09-28 09:42:48.250812894 +0000 UTC Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.791022 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7024h46m11.459794006s for next certificate rotation Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.803996 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.820332 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.835028 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.847083 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.860267 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.874791 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.888031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.888071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.888082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.888099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.888112 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.890487 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-cni-bin\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.890578 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-cni-bin\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.890637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-daemon-config\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.890664 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8dg\" (UniqueName: \"kubernetes.io/projected/5faabd8c-2204-4f29-9961-392416e98677-kube-api-access-jm8dg\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.890707 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pjk\" (UniqueName: \"kubernetes.io/projected/31c5f775-f793-4d43-9503-0070fc5ba186-kube-api-access-92pjk\") pod \"node-resolver-svpfq\" (UID: \"31c5f775-f793-4d43-9503-0070fc5ba186\") " pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.890729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-socket-dir-parent\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891087 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-netns\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891080 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-socket-dir-parent\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891113 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5faabd8c-2204-4f29-9961-392416e98677-cni-binary-copy\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 
16:56:36.891172 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-netns\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891239 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-multus-certs\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891302 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-daemon-config\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891368 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-etc-kubernetes\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891296 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-multus-certs\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891448 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-cnibin\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891482 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-os-release\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891509 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-cni-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891532 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-cnibin\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891553 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdq5\" (UniqueName: \"kubernetes.io/projected/8b02f072-d8cc-4c46-8159-fe99d19b24a6-kube-api-access-pjdq5\") pod \"multus-fmrzg\" (UID: 
\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-os-release\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891584 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1e036ba1-c8bd-48d7-bd93-71993300b60f-rootfs\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891658 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1e036ba1-c8bd-48d7-bd93-71993300b60f-rootfs\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e036ba1-c8bd-48d7-bd93-71993300b60f-mcd-auth-proxy-config\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891705 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfpw4\" (UniqueName: \"kubernetes.io/projected/1e036ba1-c8bd-48d7-bd93-71993300b60f-kube-api-access-xfpw4\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891575 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-etc-kubernetes\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891737 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-system-cni-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891770 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b02f072-d8cc-4c46-8159-fe99d19b24a6-cni-binary-copy\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891785 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-system-cni-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 
16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-cni-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-cni-multus\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891834 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-kubelet\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-cni-multus\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891872 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/31c5f775-f793-4d43-9503-0070fc5ba186-hosts-file\") pod \"node-resolver-svpfq\" (UID: \"31c5f775-f793-4d43-9503-0070fc5ba186\") " pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891890 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-var-lib-kubelet\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891893 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-conf-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-multus-conf-dir\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891960 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/31c5f775-f793-4d43-9503-0070fc5ba186-hosts-file\") pod \"node-resolver-svpfq\" (UID: \"31c5f775-f793-4d43-9503-0070fc5ba186\") " pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.891970 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-os-release\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892029 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-hostroot\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-cnibin\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892078 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e036ba1-c8bd-48d7-bd93-71993300b60f-proxy-tls\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892096 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-system-cni-dir\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892098 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-hostroot\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892138 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-k8s-cni-cncf-io\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892115 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-host-run-k8s-cni-cncf-io\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892145 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-cnibin\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " 
pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892179 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5faabd8c-2204-4f29-9961-392416e98677-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892028 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b02f072-d8cc-4c46-8159-fe99d19b24a6-os-release\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892187 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-system-cni-dir\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892373 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5faabd8c-2204-4f29-9961-392416e98677-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892777 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b02f072-d8cc-4c46-8159-fe99d19b24a6-cni-binary-copy\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.892993 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e036ba1-c8bd-48d7-bd93-71993300b60f-mcd-auth-proxy-config\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.893500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5faabd8c-2204-4f29-9961-392416e98677-cni-binary-copy\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.893584 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5faabd8c-2204-4f29-9961-392416e98677-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.896985 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.899558 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e036ba1-c8bd-48d7-bd93-71993300b60f-proxy-tls\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.911536 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8dg\" (UniqueName: \"kubernetes.io/projected/5faabd8c-2204-4f29-9961-392416e98677-kube-api-access-jm8dg\") pod \"multus-additional-cni-plugins-qzngg\" (UID: \"5faabd8c-2204-4f29-9961-392416e98677\") " pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 
16:56:36.921295 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pjk\" (UniqueName: \"kubernetes.io/projected/31c5f775-f793-4d43-9503-0070fc5ba186-kube-api-access-92pjk\") pod \"node-resolver-svpfq\" (UID: \"31c5f775-f793-4d43-9503-0070fc5ba186\") " pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.924414 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.928366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdq5\" (UniqueName: \"kubernetes.io/projected/8b02f072-d8cc-4c46-8159-fe99d19b24a6-kube-api-access-pjdq5\") pod \"multus-fmrzg\" (UID: \"8b02f072-d8cc-4c46-8159-fe99d19b24a6\") " pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.928593 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpw4\" (UniqueName: \"kubernetes.io/projected/1e036ba1-c8bd-48d7-bd93-71993300b60f-kube-api-access-xfpw4\") pod \"machine-config-daemon-kwsj4\" (UID: \"1e036ba1-c8bd-48d7-bd93-71993300b60f\") " pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.948638 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.960152 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.962253 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-svpfq" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.970993 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:56:36 crc kubenswrapper[4853]: W1209 16:56:36.974197 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c5f775_f793_4d43_9503_0070fc5ba186.slice/crio-51846bb6849f28f399fea7947b58572af02dec14c4ce735920277db408b0fe8f WatchSource:0}: Error finding container 51846bb6849f28f399fea7947b58572af02dec14c4ce735920277db408b0fe8f: Status 404 returned error can't find the container with id 51846bb6849f28f399fea7947b58572af02dec14c4ce735920277db408b0fe8f Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.977960 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fmrzg" Dec 09 16:56:36 crc kubenswrapper[4853]: W1209 16:56:36.982933 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e036ba1_c8bd_48d7_bd93_71993300b60f.slice/crio-29ad5fd4c7f9d92d3bec15cd7c64db2e18b8569c90fafd9cbbd84cc92e0f599a WatchSource:0}: Error finding container 29ad5fd4c7f9d92d3bec15cd7c64db2e18b8569c90fafd9cbbd84cc92e0f599a: Status 404 returned error can't find the container with id 29ad5fd4c7f9d92d3bec15cd7c64db2e18b8569c90fafd9cbbd84cc92e0f599a Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.985939 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qzngg" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.990290 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.991160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.991215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.991232 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.991919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:36 crc kubenswrapper[4853]: I1209 16:56:36.991937 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:36Z","lastTransitionTime":"2025-12-09T16:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.034173 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fzlgt"] Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.034943 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.037630 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.037662 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.037632 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.037902 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.038945 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.039346 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.039547 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.049883 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.061541 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.079995 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.091993 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.093890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.093924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.093932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.093945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.093956 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094645 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-ovn\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-systemd-units\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094746 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-var-lib-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094763 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-netd\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-node-log\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094847 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovn-node-metrics-cert\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094862 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-log-socket\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094884 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094926 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-config\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094941 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-kubelet\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.094990 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fmxt\" (UniqueName: \"kubernetes.io/projected/f18ca0bf-dc49-4000-97e9-9a64adac54de-kube-api-access-8fmxt\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095006 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-env-overrides\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095040 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-slash\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095054 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-netns\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095067 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095119 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-script-lib\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095176 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-bin\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095221 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-etc-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.095335 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-systemd\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.109624 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.121224 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.134517 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.146120 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.167840 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.179932 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.190870 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.198821 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.198860 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.198868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.198880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.198889 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199416 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-systemd\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199460 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-etc-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199477 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-ovn\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-systemd-units\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199543 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-var-lib-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199556 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199570 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-netd\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199584 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-node-log\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199585 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-systemd\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199613 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovn-node-metrics-cert\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199662 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-ovn\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199700 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-log-socket\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199732 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-ovn-kubernetes\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199756 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199757 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-config\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199808 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-kubelet\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fmxt\" (UniqueName: \"kubernetes.io/projected/f18ca0bf-dc49-4000-97e9-9a64adac54de-kube-api-access-8fmxt\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199846 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-env-overrides\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199862 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-slash\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199883 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-netns\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199904 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-bin\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199925 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199947 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-script-lib\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200051 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-ovn-kubernetes\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200088 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-log-socket\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200112 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-netd\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-slash\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200326 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-netns\") pod 
\"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200370 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-bin\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200437 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-node-log\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.199731 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-etc-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-var-lib-openvswitch\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200519 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-systemd-units\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200544 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-config\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200552 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-kubelet\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.200828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-env-overrides\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 
16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.201118 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-script-lib\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.202664 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovn-node-metrics-cert\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.203325 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.215574 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fmxt\" (UniqueName: \"kubernetes.io/projected/f18ca0bf-dc49-4000-97e9-9a64adac54de-kube-api-access-8fmxt\") pod \"ovnkube-node-fzlgt\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.217118 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154ed
c32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.301303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.301338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.301346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.301360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.301368 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.362930 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.403948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.403988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.403999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.404017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.404028 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: W1209 16:56:37.425334 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf18ca0bf_dc49_4000_97e9_9a64adac54de.slice/crio-8610dbc708f30c46e7f79b376a60751703324f95922ce5b64bd15ee9d73de750 WatchSource:0}: Error finding container 8610dbc708f30c46e7f79b376a60751703324f95922ce5b64bd15ee9d73de750: Status 404 returned error can't find the container with id 8610dbc708f30c46e7f79b376a60751703324f95922ce5b64bd15ee9d73de750 Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.505874 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.505915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.505927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.505945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.505957 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.567184 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.567187 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.567308 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:37 crc kubenswrapper[4853]: E1209 16:56:37.567461 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:37 crc kubenswrapper[4853]: E1209 16:56:37.567553 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:37 crc kubenswrapper[4853]: E1209 16:56:37.567756 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.609300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.609355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.609371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.609394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.609410 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.685636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"8610dbc708f30c46e7f79b376a60751703324f95922ce5b64bd15ee9d73de750"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.687119 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.687139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.687148 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"29ad5fd4c7f9d92d3bec15cd7c64db2e18b8569c90fafd9cbbd84cc92e0f599a"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.689995 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-svpfq" event={"ID":"31c5f775-f793-4d43-9503-0070fc5ba186","Type":"ContainerStarted","Data":"07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.690131 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-svpfq" event={"ID":"31c5f775-f793-4d43-9503-0070fc5ba186","Type":"ContainerStarted","Data":"51846bb6849f28f399fea7947b58572af02dec14c4ce735920277db408b0fe8f"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.691975 4853 generic.go:334] "Generic (PLEG): container finished" podID="5faabd8c-2204-4f29-9961-392416e98677" containerID="6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d" exitCode=0 Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.692090 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerDied","Data":"6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.692203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerStarted","Data":"2674de4c350729a78b8f1ddde797f8f8b1f3cbe9395221cec5a4089f0649b0aa"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.694218 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerStarted","Data":"9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.694260 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerStarted","Data":"a321eb77c9863cd34ca0b491a8d6d8290fcb30cce444113682bdff77fea20b3c"} Dec 09 16:56:37 crc kubenswrapper[4853]: 
I1209 16:56:37.698779 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.711093 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.712929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.712953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.713173 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.713203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.713380 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.731359 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.749361 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.767935 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.780733 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.817990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.818017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.818025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.818039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.818048 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.820859 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.836981 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.859925 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.873259 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.884765 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.894827 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.913701 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f
mxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.920068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.920098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.920109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.920126 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.920137 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:37Z","lastTransitionTime":"2025-12-09T16:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.935736 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.949015 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.960471 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.972673 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.985127 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:37 crc kubenswrapper[4853]: I1209 16:56:37.997655 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:37Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.012466 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc 
kubenswrapper[4853]: I1209 16:56:38.022495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.022553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.022567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.022587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.022622 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.030467 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.045060 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.058003 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.069813 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.080302 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.097674 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.103588 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tw8jq"] Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.104024 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tw8jq" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.105789 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.105981 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.106896 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.107044 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.122285 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.126237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.126278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.126287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.126302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.126312 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.138760 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.156699 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.168097 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.177191 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.203806 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.210114 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppb25\" (UniqueName: \"kubernetes.io/projected/ac35c63b-ab53-469b-99f8-f2a354be323d-kube-api-access-ppb25\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.210160 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac35c63b-ab53-469b-99f8-f2a354be323d-host\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.210191 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac35c63b-ab53-469b-99f8-f2a354be323d-serviceca\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.228702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.228760 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.228775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.228797 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.228811 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
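Note: every "Failed to update status for pod" record in this stretch fails for the same root cause, reported at the tail of each patch attempt: the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate whose validity ended 2025-08-24T17:21:41Z, while the node's clock reads 2025-12-09T16:56:38Z, so the kubelet's TLS handshake is rejected before any patch is applied. The failing check is the standard x509 validity-window comparison. A minimal Go sketch of the same check, assuming the certificate is a PEM file named tls.crt under the /etc/webhook-cert/ mount that appears in the network-node-identity-vrzqb record later in this log (the mount path is from the log; the filename is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Mount path /etc/webhook-cert/ is taken from the webhook container's
	// volumeMounts in this log; the tls.crt filename is an assumption.
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in file")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	// Same window test the TLS verifier applies; when it fails, the
	// handshake error reads "certificate has expired or is not yet valid".
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is inside its validity window")
	}
}

Until that certificate is rotated, every status patch in this log will keep failing with the identical error, which is why the same message recurs on each record below.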
Has your network provider started?"} Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.228764 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.243751 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.257679 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.272413 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
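Note: the escaped JSON in each failed patch is a strategic-merge patch body, not a full PodStatus. The $setElementOrder/conditions directive pins the ordering of the conditions list while the conditions array carries only the entries that changed; containerStatuses, podIP, and podIPs ride along as ordinary merge keys. A sketch reconstructing the shape of the network-check-source patch in this record (uid and condition values copied from the log; the helper type is illustrative, not kubelet source):

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative condition shape; the real type is v1.PodCondition.
type cond struct {
	Type    string `json:"type"`
	Status  string `json:"status,omitempty"`
	Reason  string `json:"reason,omitempty"`
	Message string `json:"message,omitempty"`
}

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "9d751cbb-f2e2-430d-9754-c882a5e924a5"},
		"status": map[string]any{
			// Directive consumed by the strategic-merge-patch code: it fixes
			// the order of the conditions list without restating every field.
			"$setElementOrder/conditions": []cond{
				{Type: "PodReadyToStartContainers"}, {Type: "Initialized"},
				{Type: "Ready"}, {Type: "ContainersReady"}, {Type: "PodScheduled"},
			},
			"conditions": []cond{{
				Type:    "Ready",
				Status:  "False",
				Reason:  "ContainersNotReady",
				Message: "containers with unready status: [check-endpoints]",
			}},
		},
	}
	out, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}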
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.284651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.296694 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.309103 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc 
kubenswrapper[4853]: I1209 16:56:38.311056 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppb25\" (UniqueName: \"kubernetes.io/projected/ac35c63b-ab53-469b-99f8-f2a354be323d-kube-api-access-ppb25\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.311187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac35c63b-ab53-469b-99f8-f2a354be323d-host\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.311249 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac35c63b-ab53-469b-99f8-f2a354be323d-host\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.311271 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac35c63b-ab53-469b-99f8-f2a354be323d-serviceca\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.312381 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac35c63b-ab53-469b-99f8-f2a354be323d-serviceca\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.317707 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.328509 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppb25\" (UniqueName: \"kubernetes.io/projected/ac35c63b-ab53-469b-99f8-f2a354be323d-kube-api-access-ppb25\") pod \"node-ca-tw8jq\" (UID: \"ac35c63b-ab53-469b-99f8-f2a354be323d\") " pod="openshift-image-registry/node-ca-tw8jq"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.330413 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.330444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.330452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.330466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.330476 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.428219 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tw8jq"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.433125 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.433157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.433167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.433182 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.433196 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:38 crc kubenswrapper[4853]: W1209 16:56:38.445872 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac35c63b_ab53_469b_99f8_f2a354be323d.slice/crio-d1a892e3178d0d7f5869347b421089c8d42ca6d162ea895db3056c171fa0db1d WatchSource:0}: Error finding container d1a892e3178d0d7f5869347b421089c8d42ca6d162ea895db3056c171fa0db1d: Status 404 returned error can't find the container with id d1a892e3178d0d7f5869347b421089c8d42ca6d162ea895db3056c171fa0db1d
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.536907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.537175 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.537183 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.537198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.537209 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.638879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.638919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.638927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.638940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.638950 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.699158 4853 generic.go:334] "Generic (PLEG): container finished" podID="5faabd8c-2204-4f29-9961-392416e98677" containerID="a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88" exitCode=0
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.699241 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerDied","Data":"a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88"}
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.702888 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44" exitCode=0
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.703192 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44"}
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.704752 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tw8jq" event={"ID":"ac35c63b-ab53-469b-99f8-f2a354be323d","Type":"ContainerStarted","Data":"d1a892e3178d0d7f5869347b421089c8d42ca6d162ea895db3056c171fa0db1d"}
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.719256 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.731857 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.744811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.744844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.744855 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.744871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.744882 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.744970 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.759941 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.769376 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.783458 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.794527 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.813617 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.827795 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.837669 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.848376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.848411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.848422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.848437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.848448 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.857065 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.869541 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.872880 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.875535 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.876915 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.887670 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.901610 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.911735 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.928065 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4c
ef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.950816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.950848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.950857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.950871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.950881 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:38Z","lastTransitionTime":"2025-12-09T16:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:38 crc kubenswrapper[4853]: I1209 16:56:38.959560 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.001763 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:38Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.040839 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.053448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.053486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.053499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.053516 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.053529 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.080636 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.126367 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.156076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.156113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.156126 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.156143 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.156154 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.165123 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.205021 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.236803 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.258518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.258557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.258572 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.258588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.258614 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.280042 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.319651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.322099 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.322332 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:56:47.322303222 +0000 UTC m=+34.257042424 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.322439 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.322512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.322534 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.322640 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:47.32261692 +0000 UTC m=+34.257356122 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.322656 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.322751 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:47.322704393 +0000 UTC m=+34.257443585 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.361251 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket
-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.361459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.361476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.361484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.361497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.361506 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.401293 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.423179 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.423221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423341 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423364 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423373 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423413 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:47.423401239 +0000 UTC m=+34.358140421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423424 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423473 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423488 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.423540 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:47.423524582 +0000 UTC m=+34.358263764 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.437731 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.464158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.464212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.464226 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.464246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.464259 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566331 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566394 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566403 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.566485 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566708 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566716 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.566740 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.566874 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.566948 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.669267 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.669317 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.669332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.669352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.669367 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.711182 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.711232 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.711246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.711258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.711272 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.711284 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.712550 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tw8jq" event={"ID":"ac35c63b-ab53-469b-99f8-f2a354be323d","Type":"ContainerStarted","Data":"5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.715196 4853 generic.go:334] "Generic (PLEG): container finished" podID="5faabd8c-2204-4f29-9961-392416e98677" containerID="1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa" exitCode=0 Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.715293 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerDied","Data":"1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa"} Dec 09 16:56:39 crc kubenswrapper[4853]: E1209 16:56:39.721768 4853 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.732374 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.744737 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.758499 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.771137 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.771207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.771230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.771242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.771259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.771269 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.782372 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.802216 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.819304 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.830792 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.842320 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.859488 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.873195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.873266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.873287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.873308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.873322 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.901640 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.938967 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.976244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.976289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.976300 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.976314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.976327 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:39Z","lastTransitionTime":"2025-12-09T16:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:39 crc kubenswrapper[4853]: I1209 16:56:39.980940 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:39Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.020967 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.059174 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.078779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.078814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.078827 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.078846 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.078860 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.098463 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 
16:56:40.139720 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.180018 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.181568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.181615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.181628 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.181644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.181654 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.220711 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.266832 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.283689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.283729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.283741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.283757 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.283772 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.306748 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.338037 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.383968 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.385577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.385637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.385649 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.385669 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.385681 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.421452 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.459512 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.488485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.488524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.488533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.488547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.488556 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.498074 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.540121 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.586240 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.590683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.590715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.590724 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.590738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.590747 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.620497 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.660727 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.692429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.692464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.692472 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.692484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.692493 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.721136 4853 generic.go:334] "Generic (PLEG): container finished" podID="5faabd8c-2204-4f29-9961-392416e98677" containerID="bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e" exitCode=0 Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.721197 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerDied","Data":"bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.736565 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.752571 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.778324 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.794213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.794246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.794254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.794269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.794281 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.821910 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.859766 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.896434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.896477 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.896487 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.896503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.896514 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.900929 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 
16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.938621 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.981623 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:40Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.998750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.998790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.998801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.998816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:40 crc kubenswrapper[4853]: I1209 16:56:40.998826 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:40Z","lastTransitionTime":"2025-12-09T16:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.018261 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.097567 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.101257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.101289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.101297 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.101308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.101319 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.113193 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.138860 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.182327 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.203237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.203295 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.203518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.203546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.203809 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.224896 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.261354 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.306848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.306896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.306912 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.306934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.306951 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.409666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.409710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.409721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.409736 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.409748 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.512744 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.512800 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.512818 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.512842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.512860 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.566917 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.566917 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 16:56:41 crc kubenswrapper[4853]: E1209 16:56:41.567071 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 16:56:41 crc kubenswrapper[4853]: E1209 16:56:41.567248 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.567448 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 16:56:41 crc kubenswrapper[4853]: E1209 16:56:41.567639 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.616108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.616162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.616177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.616197 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.616211 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.719107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.719164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.719174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.719190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.719200 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.728648 4853 generic.go:334] "Generic (PLEG): container finished" podID="5faabd8c-2204-4f29-9961-392416e98677" containerID="42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561" exitCode=0
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.728661 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerDied","Data":"42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561"}
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.752846 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.774185 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.795817 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.811221 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.822874 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.822931 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.822948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.822969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.822984 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.825487 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.844625 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.867052 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.878760 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.891384 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.902716 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.913283 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.924699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.924737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.924748 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.924761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.924770 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:41Z","lastTransitionTime":"2025-12-09T16:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.925352 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.936651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.949160 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:41 crc kubenswrapper[4853]: I1209 16:56:41.959473 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:41Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.027352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.027388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.027399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.027414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.027424 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.130506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.130548 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.130560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.130579 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.130590 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.233060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.233092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.233101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.233114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.233122 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.335775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.335842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.335865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.335894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.335919 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.438721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.438779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.438791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.438810 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.438823 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.541180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.541242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.541263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.541293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.541313 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.643961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.644011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.644026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.644046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.644061 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.735670 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.738849 4853 generic.go:334] "Generic (PLEG): container finished" podID="5faabd8c-2204-4f29-9961-392416e98677" containerID="65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501" exitCode=0 Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.738915 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerDied","Data":"65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.745824 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.745875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.745886 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.745899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.745910 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.757985 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.773401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.786856 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.802197 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308
610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.815524 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.836129 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.851133 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.853406 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.853466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.853492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.853523 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.853543 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.872118 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.889764 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.902190 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.923536 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.955875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.955902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.955912 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.955926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.955938 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:42Z","lastTransitionTime":"2025-12-09T16:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.957096 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd
/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.973344 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.985683 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:42 crc kubenswrapper[4853]: I1209 16:56:42.998357 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:42Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.058367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.058403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.058412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.058426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.058435 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.161806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.161839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.161850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.161864 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.161875 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.315558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.315608 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.315620 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.315637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.315646 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.418130 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.418645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.418675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.418708 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.418730 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.422319 4853 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.521724 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.521760 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.521771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.521786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.521796 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.566375 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.566435 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.566397 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:43 crc kubenswrapper[4853]: E1209 16:56:43.566582 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:43 crc kubenswrapper[4853]: E1209 16:56:43.566645 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:43 crc kubenswrapper[4853]: E1209 16:56:43.566695 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.599137 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.619958 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.624930 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.624966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.624977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.624993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.625005 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.637116 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.659258 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.674877 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.694154 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.709389 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.727209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.727250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.727264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.727282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.727295 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.728835 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da82686
3110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.738444 4853 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.749094 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" event={"ID":"5faabd8c-2204-4f29-9961-392416e98677","Type":"ContainerStarted","Data":"d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.756203 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.768217 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.780768 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.791889 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.802628 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.824328 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z 
is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.829616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.829683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.829701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.829726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.829744 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.841862 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.858045 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.873019 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.886946 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T
16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.895567 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.909401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.919578 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.931715 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.932855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.932894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.932910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.932927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.932941 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:43Z","lastTransitionTime":"2025-12-09T16:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.942860 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.955495 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:43 crc kubenswrapper[4853]: I1209 16:56:43.977999 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.004585 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.018812 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.036411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.036482 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.036506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.036537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.036558 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.038312 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.052551 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.138341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.138371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.138380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.138393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.138401 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.241099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.241128 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.241137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.241150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.241159 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.343950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.343996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.344008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.344025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.344037 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.446309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.446659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.446679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.446699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.446712 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.549568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.549636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.549646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.549660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.549671 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.593543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.593588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.593617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.593634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.593647 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: E1209 16:56:44.616029 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.621812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.621877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.621899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.621926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.622013 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: E1209 16:56:44.645793 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.651535 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.651630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.651682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.651702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.651714 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: E1209 16:56:44.670241 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.675739 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.675798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.675815 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.675842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.675862 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: E1209 16:56:44.690696 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.695262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.695304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.695316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.695334 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.695349 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: E1209 16:56:44.708784 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: E1209 16:56:44.708901 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.710984 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.711015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.711025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.711042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.711054 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.767421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.782830 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db
78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.801444 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.814230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.814369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.814392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.814418 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.814476 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.819021 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.840010 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.852910 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.882738 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.903369 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.919070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.919144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.919165 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.919190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.919208 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:44Z","lastTransitionTime":"2025-12-09T16:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.924844 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.941127 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.968917 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e1
64cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:44 crc kubenswrapper[4853]: I1209 16:56:44.989906 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:44Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.005698 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.021050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.021085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.021093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.021106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.021116 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.024518 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:
56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.034187 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.044903 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.123113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.123161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.123169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.123186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.123196 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.225972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.226039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.226057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.226081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.226099 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.329071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.329144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.329170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.329200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.329226 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.432159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.432201 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.432213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.432224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.432233 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.534748 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.534842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.534868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.534892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.534912 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.566385 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.566422 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.566434 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:45 crc kubenswrapper[4853]: E1209 16:56:45.566522 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:45 crc kubenswrapper[4853]: E1209 16:56:45.566644 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:45 crc kubenswrapper[4853]: E1209 16:56:45.566745 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.637113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.637171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.637182 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.637195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.637203 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.739365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.739434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.739487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.739521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.739547 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.770916 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.770962 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.770978 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.807950 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.809969 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.828657 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.841964 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.842002 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.842012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.842028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.842037 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.845478 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.859796 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.872249 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.882387 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.899651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socke
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.918022 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.932532 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.943907 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.944410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.944522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.944633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.944750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.944834 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:45Z","lastTransitionTime":"2025-12-09T16:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.957577 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.976730 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:45 crc kubenswrapper[4853]: I1209 16:56:45.990216 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:45Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.004120 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\"
,\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.019303 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.028629 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.039012 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.046829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.046866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.046878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.046895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.046909 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.049359 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.058139 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.072830 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.088902 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.098646 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.118894 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.142340 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.149495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.149519 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.149528 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.149541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.149550 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.166674 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.181285 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
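
Every "Failed to update status for pod" record in this stretch of the journal shares one root cause: the kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and the TLS handshake fails because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-09T16:56:46Z. The NodeNotReady transition above is a downstream symptom: ovnkube-controller (reported ready=false earlier in this journal) is what writes the CNI configuration, so /etc/kubernetes/cni/net.d/ stays empty and the runtime reports NetworkReady=false. A quick confirmation from the node itself, as a diagnostic sketch (it assumes openssl is installed on the host and the webhook is still listening on 9743):

    # Pull the webhook's serving certificate off the listener and print its validity window.
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
        | openssl x509 -noout -dates
    # Rule out clock skew as the alternative explanation for "not yet valid" errors.
    date -u

If notAfter matches the 2025-08-24 date in the errors, the certificate really has lapsed, and every webhook-mediated request will keep failing until it is rotated.
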
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.196620 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.211226 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.220820 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.241295 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
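
Note that the pod statuses being reported in these records are otherwise healthy: multus, the additional CNI plugin init containers, and node-ca all completed or are Running; only the patch that would record that fact is rejected. To separate "CNI never configured" from "CNI configured but status stale" on the node, two checks help (a sketch; the path comes from the NotReady message above, and crictl assumes a CRI-O node such as this one):

    # The directory the kubelet is complaining about; empty means
    # OVN-Kubernetes never wrote its CNI config.
    ls -l /etc/kubernetes/cni/net.d/
    # Confirm whether the container that writes it is actually up.
    sudo crictl ps -a --name ovnkube-controller
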
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.251751 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.251784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.251795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.251811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.251823 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.257159 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:46Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.353645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.353680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.353691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.353707 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.353718 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.455680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.455718 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.455731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.455748 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.455760 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.557663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.557691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.557698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.557713 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.557722 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.660951 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.661016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.661041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.661068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.661087 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.764320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.764405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.764429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.764465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.764494 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.867552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.867615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.867625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.867640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.867650 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.969474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.969552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.969571 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.969623 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:46 crc kubenswrapper[4853]: I1209 16:56:46.969644 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:46Z","lastTransitionTime":"2025-12-09T16:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.072354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.072455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.072476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.072500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.072521 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.175621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.175658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.175667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.175680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.175689 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.279110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.279162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.279177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.279198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.279215 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.344490 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.344776 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.344801 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 16:57:03.344757874 +0000 UTC m=+50.279497086 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.344918 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.344957 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.345037 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:03.345017561 +0000 UTC m=+50.279756743 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.345036 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.345119 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:03.345096353 +0000 UTC m=+50.279835575 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.382179 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.382214 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.382223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.382238 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.382250 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.446351 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.446401 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446543 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446563 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446577 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446637 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446675 4853 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446689 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446655 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:03.44663822 +0000 UTC m=+50.381377412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.446751 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:03.446731773 +0000 UTC m=+50.381470965 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.485133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.485190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.485206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.485230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.485248 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.566472 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.566479 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.566650 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.566491 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.566743 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:47 crc kubenswrapper[4853]: E1209 16:56:47.566770 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.587447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.587490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.587499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.587515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.587524 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.690753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.690823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.690844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.690867 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.690884 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.779145 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/0.log" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.781896 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf" exitCode=1 Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.781942 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.783094 4853 scope.go:117] "RemoveContainer" containerID="a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.792820 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.792848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.792857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.792870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.792880 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.807060 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.823638 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.838320 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.867187 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad58
3c3037a96e24f0cfceaea3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:47Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 16:56:47.329372 6168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 16:56:47.329394 6168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:56:47.329400 6168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 16:56:47.329411 6168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 16:56:47.329422 6168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 16:56:47.329434 6168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:56:47.329438 6168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 16:56:47.329459 6168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 16:56:47.329578 6168 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 16:56:47.329608 6168 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 16:56:47.329617 6168 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 16:56:47.329624 6168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 16:56:47.329630 6168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 16:56:47.329637 6168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:56:47.329905 6168 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.891132 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.895145 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.895197 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.895215 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.895240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.895258 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.909742 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9
b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.926306 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.941426 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.961193 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.976911 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.990226 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:47Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.997834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.997900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.997918 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.997943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:47 crc kubenswrapper[4853]: I1209 16:56:47.997964 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:47Z","lastTransitionTime":"2025-12-09T16:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.008879 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.019569 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.036514 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.048882 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.100505 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.100537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.100545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.100557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.100566 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.210855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.210901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.210942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.210964 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.210973 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.313464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.313501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.313512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.313527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.313537 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.416160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.416205 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.416224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.416246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.416264 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.518969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.519001 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.519019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.519038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.519048 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.621274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.621377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.621396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.621422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.621439 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.724541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.724622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.724633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.724651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.724663 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.790236 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/0.log" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.793591 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.794174 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.818260 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.827163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.827209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.827221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.827237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.827250 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.838846 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.851209 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.864466 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.879881 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.895291 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.911136 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.933312 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T
16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.941755 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh"] Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.942132 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.943460 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.943486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.943497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.943509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.943519 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:48Z","lastTransitionTime":"2025-12-09T16:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.944912 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.945634 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.947099 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.962033 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.974421 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.986339 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:48 crc kubenswrapper[4853]: I1209 16:56:48.996998 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.007779 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.017422 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/22a4b32a-bcd8-400d-956d-6971df0d5c03-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.017646 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/22a4b32a-bcd8-400d-956d-6971df0d5c03-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.017798 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rkrv\" (UniqueName: \"kubernetes.io/projected/22a4b32a-bcd8-400d-956d-6971df0d5c03-kube-api-access-6rkrv\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.017924 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/22a4b32a-bcd8-400d-956d-6971df0d5c03-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.023705 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:47Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 16:56:47.329372 6168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 16:56:47.329394 6168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:56:47.329400 6168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 16:56:47.329411 6168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 16:56:47.329422 6168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 16:56:47.329434 6168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:56:47.329438 6168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 16:56:47.329459 6168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 16:56:47.329578 6168 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 16:56:47.329608 6168 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 16:56:47.329617 6168 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 16:56:47.329624 6168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 16:56:47.329630 6168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 16:56:47.329637 6168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:56:47.329905 6168 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.038265 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.045285 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.045321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.045329 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.045343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.045353 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.051451 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.064095 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.076820 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.088655 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.101488 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.115742 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.118338 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/22a4b32a-bcd8-400d-956d-6971df0d5c03-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.118365 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/22a4b32a-bcd8-400d-956d-6971df0d5c03-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.118398 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rkrv\" (UniqueName: \"kubernetes.io/projected/22a4b32a-bcd8-400d-956d-6971df0d5c03-kube-api-access-6rkrv\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.118416 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/22a4b32a-bcd8-400d-956d-6971df0d5c03-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.119818 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/22a4b32a-bcd8-400d-956d-6971df0d5c03-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.121720 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/22a4b32a-bcd8-400d-956d-6971df0d5c03-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.130434 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/22a4b32a-bcd8-400d-956d-6971df0d5c03-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.132274 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.143294 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rkrv\" (UniqueName: \"kubernetes.io/projected/22a4b32a-bcd8-400d-956d-6971df0d5c03-kube-api-access-6rkrv\") pod \"ovnkube-control-plane-749d76644c-x2mnh\" (UID: \"22a4b32a-bcd8-400d-956d-6971df0d5c03\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.146108 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.148213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.148245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.148257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.148277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.148289 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.156104 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.172563 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:47Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 16:56:47.329372 6168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 16:56:47.329394 6168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:56:47.329400 6168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 16:56:47.329411 6168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 16:56:47.329422 6168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 16:56:47.329434 6168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:56:47.329438 6168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 16:56:47.329459 6168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 16:56:47.329578 6168 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 16:56:47.329608 6168 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 16:56:47.329617 6168 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 16:56:47.329624 6168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 16:56:47.329630 6168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 16:56:47.329637 6168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:56:47.329905 6168 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.183964 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.205848 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.224783 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.244452 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.250731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.250795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.250816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.250844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.250863 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.253539 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.264252 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: W1209 16:56:49.266460 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22a4b32a_bcd8_400d_956d_6971df0d5c03.slice/crio-9a1bf37d869b3db65724255ee2c194975a9db7e9dfa1d433c1f131b35260df42 WatchSource:0}: Error finding container 9a1bf37d869b3db65724255ee2c194975a9db7e9dfa1d433c1f131b35260df42: Status 404 returned error can't find the container with id 9a1bf37d869b3db65724255ee2c194975a9db7e9dfa1d433c1f131b35260df42 Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.354061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.354114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.354128 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.354146 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.354156 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.457031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.457071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.457082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.457098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.457111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.559487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.559530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.559542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.559557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.559570 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.571535 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.571547 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.571621 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:49 crc kubenswrapper[4853]: E1209 16:56:49.571733 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:49 crc kubenswrapper[4853]: E1209 16:56:49.572033 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:49 crc kubenswrapper[4853]: E1209 16:56:49.572256 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.621288 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.639083 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.659144 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.662306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.662367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.662386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.662411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.662430 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.676846 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.709902 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:47Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 16:56:47.329372 6168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 16:56:47.329394 6168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:56:47.329400 6168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 16:56:47.329411 6168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 16:56:47.329422 6168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 16:56:47.329434 6168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:56:47.329438 6168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 16:56:47.329459 6168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 16:56:47.329578 6168 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 16:56:47.329608 6168 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 16:56:47.329617 6168 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 16:56:47.329624 6168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 16:56:47.329630 6168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 16:56:47.329637 6168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:56:47.329905 6168 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.728130 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.759291 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.765372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.765415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.765428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.765445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.765458 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.780378 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.799302 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" event={"ID":"22a4b32a-bcd8-400d-956d-6971df0d5c03","Type":"ContainerStarted","Data":"9a1bf37d869b3db65724255ee2c194975a9db7e9dfa1d433c1f131b35260df42"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.799983 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.801990 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/1.log" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.802993 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/0.log" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.807425 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af" exitCode=1 Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.807478 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.807528 4853 scope.go:117] "RemoveContainer" containerID="a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.808513 4853 scope.go:117] "RemoveContainer" containerID="26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af" Dec 09 16:56:49 crc kubenswrapper[4853]: E1209 16:56:49.808794 4853 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.819333 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.837267 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.855124 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.871858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.871893 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.871901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.871914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.871926 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.872135 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.888867 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.900220 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.914445 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.927678 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.942652 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d92
90beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.953653 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.964899 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.974218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.974268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.974303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.974323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.974338 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:49Z","lastTransitionTime":"2025-12-09T16:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:49 crc kubenswrapper[4853]: I1209 16:56:49.978383 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.000657 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:47Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 16:56:47.329372 6168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 16:56:47.329394 6168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:56:47.329400 6168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 16:56:47.329411 6168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 16:56:47.329422 6168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 16:56:47.329434 6168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:56:47.329438 6168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 16:56:47.329459 6168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 16:56:47.329578 6168 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 16:56:47.329608 6168 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 16:56:47.329617 6168 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 16:56:47.329624 6168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 16:56:47.329630 6168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 16:56:47.329637 6168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:56:47.329905 6168 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 
16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41a
c2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:49Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.021106 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.038165 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.048704 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.061261 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.076756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.076806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.076816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.076832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.076842 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.083522 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.098305 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.112430 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.125168 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.135119 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.145849 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.156120 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.179377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.179421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.179429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.179444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.179453 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.282910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.282981 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.282999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.283026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.283044 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.385945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.386020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.386042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.386073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.386097 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.422129 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-77995"] Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.423052 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:50 crc kubenswrapper[4853]: E1209 16:56:50.423181 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.430910 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q656q\" (UniqueName: \"kubernetes.io/projected/7d55def8-578d-461b-9514-07eea9c62336-kube-api-access-q656q\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.430981 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.439888 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.462619 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:47Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 16:56:47.329372 6168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 16:56:47.329394 6168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:56:47.329400 6168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 16:56:47.329411 6168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 16:56:47.329422 6168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 16:56:47.329434 6168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:56:47.329438 6168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 16:56:47.329459 6168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 16:56:47.329578 6168 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 16:56:47.329608 6168 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 16:56:47.329617 6168 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 16:56:47.329624 6168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 16:56:47.329630 6168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 16:56:47.329637 6168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:56:47.329905 6168 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod 
ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.474128 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.483629 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.488364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.488393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.488402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.488416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.488425 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.497324 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.513582 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.526027 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.532000 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q656q\" (UniqueName: \"kubernetes.io/projected/7d55def8-578d-461b-9514-07eea9c62336-kube-api-access-q656q\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.532064 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:50 crc kubenswrapper[4853]: E1209 16:56:50.532167 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:50 crc kubenswrapper[4853]: E1209 16:56:50.532217 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:51.032202674 +0000 UTC m=+37.966941856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.543258 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.560647 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.561362 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q656q\" (UniqueName: \"kubernetes.io/projected/7d55def8-578d-461b-9514-07eea9c62336-kube-api-access-q656q\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.580564 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.591640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.591696 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.591708 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.591729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.591753 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.598390 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.612309 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.626585 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.643889 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.662754 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\
\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.684652 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.694261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.694327 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.694345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.694371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.694389 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.701837 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.798179 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.798258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.798282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.798314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.798339 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.819225 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" event={"ID":"22a4b32a-bcd8-400d-956d-6971df0d5c03","Type":"ContainerStarted","Data":"06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.819316 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" event={"ID":"22a4b32a-bcd8-400d-956d-6971df0d5c03","Type":"ContainerStarted","Data":"d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.821565 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/1.log" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.828016 4853 scope.go:117] "RemoveContainer" containerID="26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af" Dec 09 16:56:50 crc kubenswrapper[4853]: E1209 16:56:50.828340 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.863812 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.886285 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.901037 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.901086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.901099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.901116 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.901130 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:50Z","lastTransitionTime":"2025-12-09T16:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.909546 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.921464 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.946718 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07af54fe2a45a82637a76b170b098df9c93ad583c3037a96e24f0cfceaea3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:47Z\\\",\\\"message\\\":\\\"ork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 16:56:47.329372 6168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 16:56:47.329394 6168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:56:47.329400 6168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 16:56:47.329411 6168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 16:56:47.329422 6168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 16:56:47.329434 6168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:56:47.329438 6168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 16:56:47.329459 6168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 16:56:47.329578 6168 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 16:56:47.329608 6168 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 16:56:47.329617 6168 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 16:56:47.329624 6168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 16:56:47.329630 6168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 16:56:47.329637 6168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:56:47.329905 6168 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod 
ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.961281 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.977858 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:50 crc kubenswrapper[4853]: I1209 16:56:50.996191 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:50Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.003316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.003363 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.003378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.003395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.003410 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.012269 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.025033 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.040086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:51 crc kubenswrapper[4853]: E1209 16:56:51.040138 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:51 crc kubenswrapper[4853]: E1209 16:56:51.040237 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:52.040212823 +0000 UTC m=+38.974952075 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.049267 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5840
8f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.062297 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.076253 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.091188 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.100687 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.105339 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.105363 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.105371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.105384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.105392 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.112649 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.123677 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.141423 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.154021 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.167550 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.182738 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.199963 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.207646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.207683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.207696 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.207711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.207724 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.215184 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.229721 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.245581 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.257006 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.274310 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.287065 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.305244 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.311114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.311157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.311171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.311189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.311205 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.320512 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.334572 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.376113 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.408046 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.414739 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.415027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.415047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.415072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.415090 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.448378 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:51Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.518098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.518145 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.518161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.518181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.518195 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.566837 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.566871 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.566930 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:51 crc kubenswrapper[4853]: E1209 16:56:51.567024 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:51 crc kubenswrapper[4853]: E1209 16:56:51.567229 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:51 crc kubenswrapper[4853]: E1209 16:56:51.567512 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.620746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.620790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.620802 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.620819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.620831 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.723356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.723405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.723420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.723439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.723452 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.826533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.826667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.826689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.826715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.826734 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.929732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.929788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.929805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.929823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:51 crc kubenswrapper[4853]: I1209 16:56:51.929835 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:51Z","lastTransitionTime":"2025-12-09T16:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.032384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.032754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.032770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.032788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.032800 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.049226 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:52 crc kubenswrapper[4853]: E1209 16:56:52.049383 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:52 crc kubenswrapper[4853]: E1209 16:56:52.049440 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:54.049421795 +0000 UTC m=+40.984160977 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.135154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.135227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.135245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.135268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.135286 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.238788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.238858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.238882 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.238915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.238938 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.341279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.341326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.341337 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.341354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.341368 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.444051 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.444106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.444116 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.444132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.444144 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.547060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.547104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.547112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.547125 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.547133 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.566806 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:52 crc kubenswrapper[4853]: E1209 16:56:52.566951 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.650040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.650088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.650111 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.650129 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.650140 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.753120 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.753199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.753236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.753260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.753272 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.856138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.856187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.856195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.856213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.856225 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.958084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.958133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.958148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.958169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:52 crc kubenswrapper[4853]: I1209 16:56:52.958185 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:52Z","lastTransitionTime":"2025-12-09T16:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.060144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.060213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.060228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.060244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.060256 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.162201 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.162255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.162273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.162299 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.162312 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.269307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.269373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.269391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.269416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.269434 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.372671 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.372738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.372754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.372778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.372795 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.475826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.475890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.475902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.475918 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.475930 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.567129 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.567203 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.567509 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:53 crc kubenswrapper[4853]: E1209 16:56:53.567731 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:53 crc kubenswrapper[4853]: E1209 16:56:53.567425 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:53 crc kubenswrapper[4853]: E1209 16:56:53.567955 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.578385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.578476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.578497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.578526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.578548 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.590911 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.611084 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.624830 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.649461 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b
2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.667497 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.681417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.681454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.681464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.681427 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.681480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.681688 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.709078 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.726856 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.740652 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.754569 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.766995 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.781406 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.783956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.783985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.783995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.784008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.784017 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.796331 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.811690 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.822363 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.839333 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.851279 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:53Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.886964 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.887022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.887034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.887052 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.887065 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.989857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.989936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.989962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.989991 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:53 crc kubenswrapper[4853]: I1209 16:56:53.990010 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:53Z","lastTransitionTime":"2025-12-09T16:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.067700 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:54 crc kubenswrapper[4853]: E1209 16:56:54.067857 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:54 crc kubenswrapper[4853]: E1209 16:56:54.067922 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. No retries permitted until 2025-12-09 16:56:58.067903289 +0000 UTC m=+45.002642471 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.092698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.092752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.092762 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.092778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.092790 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.195788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.195850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.195862 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.195879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.195891 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.298132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.298195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.298215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.298241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.298262 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.400977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.401054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.401074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.401098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.401114 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.504245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.504342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.504366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.504395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.504419 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.566499 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:54 crc kubenswrapper[4853]: E1209 16:56:54.566730 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.606432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.606482 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.606493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.606511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.606523 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.709521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.709591 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.709674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.709705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.709727 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.812286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.812332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.812358 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.812372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.812381 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.915316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.915377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.915386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.915399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.915409 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.981781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.981825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.981834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.981849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.981858 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:54 crc kubenswrapper[4853]: E1209 16:56:54.994166 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:54Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.998366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.998407 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.998419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.998435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:54 crc kubenswrapper[4853]: I1209 16:56:54.998452 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:54Z","lastTransitionTime":"2025-12-09T16:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.011557 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:55Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.016027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.016079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.016094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.016114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.016164 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.033983 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:55Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.039297 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.039373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.039395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.039426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.039447 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.058243 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:55Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.062879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.062932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.062948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.062971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.062991 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.077162 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:55Z is after 2025-08-24T17:21:41Z" Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.077489 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.079218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.079301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.079320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.079346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.079363 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.181845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.181968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.181995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.182027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.182047 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.283792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.283846 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.283859 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.283875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.283888 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.387048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.387116 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.387137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.387167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.387192 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.489007 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.489067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.489085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.489107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.489125 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.567213 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.567277 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.567380 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.567410 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.567546 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:56:55 crc kubenswrapper[4853]: E1209 16:56:55.567731 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.591909 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.591967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.591983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.592008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.592027 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.694118 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.694155 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.694164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.694181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.694191 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.796378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.796432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.796441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.796458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.796468 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.899019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.899062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.899073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.899090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:55 crc kubenswrapper[4853]: I1209 16:56:55.899102 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:55Z","lastTransitionTime":"2025-12-09T16:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.002004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.002070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.002084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.002109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.002124 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.105886 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.105971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.105989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.106013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.106030 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.208444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.208516 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.208538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.208567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.208590 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.311219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.311258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.311267 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.311279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.311287 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.413702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.413747 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.413754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.413770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.413780 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.516538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.516588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.516623 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.516641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.516653 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.566509 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:56:56 crc kubenswrapper[4853]: E1209 16:56:56.566732 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.620010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.620082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.620097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.620117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.620138 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.722922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.722988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.723005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.723028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.723045 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.825902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.825957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.825971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.825989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.826001 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.928402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.928472 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.928489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.928513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:56 crc kubenswrapper[4853]: I1209 16:56:56.928531 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:56Z","lastTransitionTime":"2025-12-09T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.030989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.031046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.031058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.031082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.031098 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.133721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.133769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.133781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.133796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.133808 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.237693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.237785 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.237810 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.237861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.237879 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.340672 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.340726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.340738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.340753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.340764 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.442808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.442847 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.442855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.442870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.442880 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.545877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.545929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.545945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.545964 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.545977 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.566350 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:56:57 crc kubenswrapper[4853]: E1209 16:56:57.566461 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.566520 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.566578 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 16:56:57 crc kubenswrapper[4853]: E1209 16:56:57.566632 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 16:56:57 crc kubenswrapper[4853]: E1209 16:56:57.566826 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.648420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.648478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.648488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.648501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.648510 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.751569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.751654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.751668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.751687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.751701 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.854348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.854432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.854462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.854489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.854506 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.957441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.957519 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.957542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.957571 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:57 crc kubenswrapper[4853]: I1209 16:56:57.957592 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:57Z","lastTransitionTime":"2025-12-09T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.061040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.061106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.061123 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.061150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.061168 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.109170 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:56:58 crc kubenswrapper[4853]: E1209 16:56:58.109379 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 16:56:58 crc kubenswrapper[4853]: E1209 16:56:58.109458 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:06.109435324 +0000 UTC m=+53.044174516 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.164366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.164443 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.164461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.164493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.164517 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.267695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.267760 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.267769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.267784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.267793 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.371469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.371516 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.371525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.371543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.371558 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.475387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.475435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.475444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.475459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.475468 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.567094 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:56:58 crc kubenswrapper[4853]: E1209 16:56:58.567339 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.577958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.578018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.578040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.578065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.578085 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.682264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.682365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.682396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.682416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.682453 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.785258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.785392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.785450 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.785473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.785495 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.887953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.887994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.888006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.888023 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.888042 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.990625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.990663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.990672 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.990685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:58 crc kubenswrapper[4853]: I1209 16:56:58.990693 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:58Z","lastTransitionTime":"2025-12-09T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.092872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.092910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.092920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.092934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.092945 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.196044 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.196077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.196088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.196115 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.196126 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.299389 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.299496 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.299522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.299552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.299573 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.402587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.402723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.402746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.402778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.402798 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.505359 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.505419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.505436 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.505461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.505476 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.566384 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.566440 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.566503 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 16:56:59 crc kubenswrapper[4853]: E1209 16:56:59.566546 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 16:56:59 crc kubenswrapper[4853]: E1209 16:56:59.566760 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 16:56:59 crc kubenswrapper[4853]: E1209 16:56:59.566951 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.608977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.609021 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.609227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.609254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.609276 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.711946 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.711988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.712000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.712018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.712031 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.815096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.815218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.815234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.815250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.815262 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.921786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.921847 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.921861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.921880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:56:59 crc kubenswrapper[4853]: I1209 16:56:59.921892 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:56:59Z","lastTransitionTime":"2025-12-09T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.024391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.024433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.024441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.024455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.024464 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.126725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.126783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.126795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.126812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.126824 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.229092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.229144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.229155 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.229176 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.229188 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.331197 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.331270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.331289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.331312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.331329 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.434162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.434217 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.434238 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.434265 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.434288 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.537313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.537364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.537381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.537406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.537424 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.566439 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:57:00 crc kubenswrapper[4853]: E1209 16:57:00.566691 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.640723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.640794 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.640815 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.640839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.640855 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.743129 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.743173 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.743184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.743221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.743233 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.846523 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.846584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.846645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.846685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.846707 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.948673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.948709 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.948720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.948735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:00 crc kubenswrapper[4853]: I1209 16:57:00.948745 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:00Z","lastTransitionTime":"2025-12-09T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.051690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.051767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.051784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.051808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.051825 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.155284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.155357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.155372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.155392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.155407 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.257937 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.258229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.258259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.258282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.258298 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.360653 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.360716 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.360734 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.360758 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.360776 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.463843 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.463891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.463904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.463920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.463933 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566073 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566086 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 16:57:01 crc kubenswrapper[4853]: E1209 16:57:01.566249 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566418 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: E1209 16:57:01.566345 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566491 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.566100 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 16:57:01 crc kubenswrapper[4853]: E1209 16:57:01.566680 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.668886 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.668950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.668966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.668987 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.668999 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.771591 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.771647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.771656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.771672 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.771683 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.874195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.874269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.874283 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.874298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.874311 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.977000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.977055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.977066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.977083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:01 crc kubenswrapper[4853]: I1209 16:57:01.977094 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:01Z","lastTransitionTime":"2025-12-09T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.079796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.079831 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.079839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.079850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.079859 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.182995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.183061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.183078 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.183126 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.183145 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.285940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.286036 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.286056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.286083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.286100 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.388678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.388714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.388722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.388751 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.388763 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.490415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.490446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.490454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.490466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.490475 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.566273 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:57:02 crc kubenswrapper[4853]: E1209 16:57:02.566396 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.593666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.593720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.593733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.593750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.593769 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.696977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.697239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.697373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.697470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.697560 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.800351 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.800584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.800680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.800863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.800963 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.903583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.903652 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.903663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.903678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:02 crc kubenswrapper[4853]: I1209 16:57:02.903689 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:02Z","lastTransitionTime":"2025-12-09T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.005554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.005584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.005621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.005646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.005657 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.108043 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.108077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.108086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.108101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.108111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.210367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.210416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.210431 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.210452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.210468 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.313292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.313349 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.313366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.313390 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.313407 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.373910 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.374030 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.374090 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.374135 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:57:35.374110062 +0000 UTC m=+82.308849244 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.374157 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.374215 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.374218 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:35.374204024 +0000 UTC m=+82.308943206 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.374257 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:35.374244755 +0000 UTC m=+82.308983947 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.415786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.415870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.415905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.415935 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.415956 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.475096 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.475141 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475267 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475283 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475295 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475345 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:35.475332141 +0000 UTC m=+82.410071323 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475373 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475427 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475454 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.475564 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:35.475531986 +0000 UTC m=+82.410271198 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.518570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.518682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.518707 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.518737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.518762 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.566796 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.566845 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.566979 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.567224 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.567409 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:03 crc kubenswrapper[4853]: E1209 16:57:03.567561 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.584042 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.604868 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cn
i-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e
7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" 
for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.619675 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.621750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.621799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.621816 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.621840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.621855 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.638215 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.657377 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.678974 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.693916 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.710379 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.724234 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.724938 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.725097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.725122 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.725150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.725168 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.759066 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.776703 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.797516 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.814391 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.826894 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.827079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.827108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.827119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.827136 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.827147 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.839931 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.861061 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.874667 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:03Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.930245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.930294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.930305 4853 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.930321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:03 crc kubenswrapper[4853]: I1209 16:57:03.930333 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:03Z","lastTransitionTime":"2025-12-09T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.033275 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.034003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.034017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.034031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.034040 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.137085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.137127 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.137138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.137157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.137169 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.240218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.240277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.240298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.240341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.240361 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.342945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.342985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.342995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.343011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.343020 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.444956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.444996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.445010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.445028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.445041 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.546815 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.546849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.546857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.546869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.546878 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.566506 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:04 crc kubenswrapper[4853]: E1209 16:57:04.566920 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.567344 4853 scope.go:117] "RemoveContainer" containerID="26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.649753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.650056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.650068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.650082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.650111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.752965 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.753015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.753030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.753048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.753060 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.855439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.855480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.855487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.855501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.855511 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.875184 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/1.log" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.877823 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.878477 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.896943 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e
4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.908078 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.918117 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.930620 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.945440 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.953476 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.958617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.958681 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.958693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.958709 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.958720 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:04Z","lastTransitionTime":"2025-12-09T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.972061 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b6
9d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod 
ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:04 crc kubenswrapper[4853]: I1209 16:57:04.985558 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:04Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.013849 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.029318 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.046933 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.061616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.061660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.061670 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.061686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.061697 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.064753 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.075571 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.086869 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.098194 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.109886 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T
16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.118075 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.163896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.163942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.163954 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.163971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.163984 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.266039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.266077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.266088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.266106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.266116 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.298095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.298150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.298163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.298187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.298204 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.312646 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.316957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.317019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.317036 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.317055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.317068 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.327191 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.331896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.331945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.331955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.331970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.331978 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.342651 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.346764 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.346829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.346844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.346885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.346899 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.358489 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.361933 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.361982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.361995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.362016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.362030 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.372496 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.372677 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.374167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.374223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.374237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.374254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.374266 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.435814 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.445231 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.448459 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.462993 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\
":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.476764 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.476805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.476815 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.476831 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.476844 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.477503 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.490322 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.502977 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.513670 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.527881 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\
\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 
16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 
16:57:05.541179 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.554533 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.564912 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.566392 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.566515 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.566823 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.567052 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.566927 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.567303 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.581410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.581698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.581835 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.581952 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.582046 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.583478 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b6
9d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod 
ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.596546 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.607139 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.621438 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.634091 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.646900 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.666158 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.684587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.684667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.684678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.684695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.684705 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.787143 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.787180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.787192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.787208 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.787219 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.884106 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/2.log" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.885252 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/1.log" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.888791 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435" exitCode=1 Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.888873 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.888967 4853 scope.go:117] "RemoveContainer" containerID="26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.889571 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.889667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.889692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.889576 4853 scope.go:117] "RemoveContainer" containerID="5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.890274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: E1209 16:57:05.890296 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.890320 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.910833 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.930204 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.948916 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.969771 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T
16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.983472 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:05Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.992527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.992591 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.992634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.992663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:05 crc kubenswrapper[4853]: I1209 16:57:05.992686 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:05Z","lastTransitionTime":"2025-12-09T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.003069 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.016404 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.032035 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.047295 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.058257 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.075355 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.085836 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.095408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.095466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.095480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.095500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.095519 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.107063 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ded5341633824cebb781032747b2a142289b4b2b9ed8cd2f25939d3b4b88af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"message\\\":\\\"ft-multus/multus-fmrzg\\\\nI1209 16:56:48.585821 6310 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1209 16:56:48.586023 6310 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1209 16:56:48.586023 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:56:48Z is after 2025-08-24T17:21:41Z]\\\\nI1209 16:56:48.586029 6310 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1209 16:56:48.586034 6310 obj_retry.go:386] Retry successful for *v1.Pod 
ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.127499 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.142552 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.155984 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.166994 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.178220 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.197864 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.197914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.197940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.197967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.197991 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.210432 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:06 crc kubenswrapper[4853]: E1209 16:57:06.210640 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:57:06 crc kubenswrapper[4853]: E1209 16:57:06.210893 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:22.210870922 +0000 UTC m=+69.145610104 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.301553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.301627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.301640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.301656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.301667 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.404274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.404334 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.404351 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.404373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.404388 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.507489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.507541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.507553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.507569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.507582 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.566817 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:06 crc kubenswrapper[4853]: E1209 16:57:06.567031 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.610452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.610517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.610533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.610562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.610580 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.714202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.714261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.714343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.714447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.714528 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.817704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.817783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.817821 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.817850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.817871 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.896829 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/2.log" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.901744 4853 scope.go:117] "RemoveContainer" containerID="5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435" Dec 09 16:57:06 crc kubenswrapper[4853]: E1209 16:57:06.902019 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.917371 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.919484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.919525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.919535 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.919551 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.919563 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:06Z","lastTransitionTime":"2025-12-09T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.932867 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.946351 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.961747 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.979222 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T
16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:06 crc kubenswrapper[4853]: I1209 16:57:06.997202 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:06Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.010405 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.022292 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.022335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.022350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.022370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.022385 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.036970 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b6
9d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.056535 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.077858 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.092833 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.103668 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.112343 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.122391 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.124895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.124941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.124953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.124970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.124981 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.138683 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.149706 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.161829 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.173058 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:07Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.226949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.226988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.226998 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.227012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.227027 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.330117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.330245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.330269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.330298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.330324 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.433058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.433105 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.433118 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.433136 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.433149 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.535811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.535874 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.535895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.535929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.535962 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.566432 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.566484 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:07 crc kubenswrapper[4853]: E1209 16:57:07.566588 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.566740 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:07 crc kubenswrapper[4853]: E1209 16:57:07.566802 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:07 crc kubenswrapper[4853]: E1209 16:57:07.566893 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.638946 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.639003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.639020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.639047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.639065 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.742120 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.742186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.742205 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.742227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.742242 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.844698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.844756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.844768 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.844786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.844798 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.947585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.947659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.947673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.947692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:07 crc kubenswrapper[4853]: I1209 16:57:07.947705 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:07Z","lastTransitionTime":"2025-12-09T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.050536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.050577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.050587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.050618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.050631 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.153080 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.153123 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.153133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.153148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.153159 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.255333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.255374 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.255391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.255426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.255439 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.358990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.359027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.359035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.359050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.359061 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.460891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.460949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.460966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.460989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.461005 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.564391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.564673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.564686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.564701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.564712 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.566810 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:08 crc kubenswrapper[4853]: E1209 16:57:08.567003 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.667114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.667155 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.667166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.667178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.667189 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.769861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.769966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.769996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.770027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.770054 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.872873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.872907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.872918 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.872932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.872943 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.975221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.975277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.975291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.975307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:08 crc kubenswrapper[4853]: I1209 16:57:08.975320 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:08Z","lastTransitionTime":"2025-12-09T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.077901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.077960 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.077979 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.078002 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.078019 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.180401 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.180452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.180471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.180493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.180509 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.283118 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.283173 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.283230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.283268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.283292 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.385702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.385756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.385772 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.385796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.385812 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.487657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.487728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.487750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.487779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.487801 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.566267 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.566390 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.566279 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:09 crc kubenswrapper[4853]: E1209 16:57:09.566461 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:09 crc kubenswrapper[4853]: E1209 16:57:09.566632 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:09 crc kubenswrapper[4853]: E1209 16:57:09.566826 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.590030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.590101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.590121 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.590148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.590171 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.692419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.692468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.692480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.692497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.692509 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.795469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.795525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.795533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.795546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.795557 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.899059 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.899144 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.899171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.899203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:09 crc kubenswrapper[4853]: I1209 16:57:09.899226 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:09Z","lastTransitionTime":"2025-12-09T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.001684 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.001764 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.001777 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.001791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.001802 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.104074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.104112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.104120 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.104156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.104168 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.206352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.206462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.206473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.206487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.206496 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.309416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.309455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.309464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.309477 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.309485 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.412424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.412497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.412517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.412538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.412555 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.515269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.515336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.515346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.515378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.515398 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.566346 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:10 crc kubenswrapper[4853]: E1209 16:57:10.566463 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.618215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.618284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.618308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.618338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.618360 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.720471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.720541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.720558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.720583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.720627 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.823929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.823977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.823993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.824015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.824032 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.926004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.926083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.926105 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.926133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:10 crc kubenswrapper[4853]: I1209 16:57:10.926155 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:10Z","lastTransitionTime":"2025-12-09T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.029633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.029694 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.029719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.029750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.029772 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.131909 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.131961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.131977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.132003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.132020 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.234772 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.234832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.234846 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.234865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.234877 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.337185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.337238 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.337253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.337271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.337285 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.440042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.440101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.440119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.440141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.440158 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.542352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.542406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.542420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.542465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.542481 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.566417 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.566524 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.566655 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:11 crc kubenswrapper[4853]: E1209 16:57:11.566676 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:11 crc kubenswrapper[4853]: E1209 16:57:11.566778 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:11 crc kubenswrapper[4853]: E1209 16:57:11.566894 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.644641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.644710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.644730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.644750 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.644761 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.746966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.747015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.747025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.747040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.747051 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.850873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.850942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.850950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.850994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.851005 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.954174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.954261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.954469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.954559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:11 crc kubenswrapper[4853]: I1209 16:57:11.954587 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:11Z","lastTransitionTime":"2025-12-09T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.057890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.057954 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.057971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.057999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.058018 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.160716 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.160807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.160826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.160855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.160873 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.263715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.263767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.263780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.263799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.263812 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.366178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.366219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.366231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.366270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.366284 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.468056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.468127 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.468136 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.468153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.468164 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.566574 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:57:12 crc kubenswrapper[4853]: E1209 16:57:12.566896 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.570492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.570541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.570553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.570570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.570582 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.672328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.672371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.672379 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.672393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.672403 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.774753 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.774789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.774799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.774811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.774820 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.876972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.877012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.877025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.877041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.877053 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.979153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.979212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.979220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.979234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:12 crc kubenswrapper[4853]: I1209 16:57:12.979243 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:12Z","lastTransitionTime":"2025-12-09T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.081980 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.082069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.082091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.082122 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.082143 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.184358 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.184434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.184455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.184483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.184507 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.287827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.287899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.287911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.287951 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.287964 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.390369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.390430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.390444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.390461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.390473 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.493522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.493582 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.493646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.493678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.493705 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.566376 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.566566 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 16:57:13 crc kubenswrapper[4853]: E1209 16:57:13.566778 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.566812 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 16:57:13 crc kubenswrapper[4853]: E1209 16:57:13.566939 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 16:57:13 crc kubenswrapper[4853]: E1209 16:57:13.567038 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.582903 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.596231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.596280 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.596289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.596305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.596334 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.598562 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.612496 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.630264 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.642487 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.662479 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.677086 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.692571 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.698721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.698851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.698934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.699017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.699099 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.705483 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.723897 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.739578 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.754348 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.779880 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.801757 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.801814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.801831 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.801853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.801870 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.808181 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.823407 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.838523 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.855298 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.870036 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:13Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.904440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.904481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.904492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.904511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:13 crc kubenswrapper[4853]: I1209 16:57:13.904525 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:13Z","lastTransitionTime":"2025-12-09T16:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.007828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.007895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.007907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.007925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.007939 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.109923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.109972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.109983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.109999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.110010 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.212391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.212433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.212446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.212462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.212472 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.315139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.315191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.315203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.315223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.315236 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.417938 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.417996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.418010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.418030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.418046 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.521302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.521386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.521400 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.521425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.521439 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.567352 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:14 crc kubenswrapper[4853]: E1209 16:57:14.567584 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.624539 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.624589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.624627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.624648 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.624660 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.727310 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.727384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.727396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.727428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.727442 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.830327 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.830371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.830379 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.830393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.830402 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.932966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.933008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.933019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.933034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:14 crc kubenswrapper[4853]: I1209 16:57:14.933045 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:14Z","lastTransitionTime":"2025-12-09T16:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.036166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.036216 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.036226 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.036242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.036253 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.138580 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.138651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.138663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.138680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.138693 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.241579 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.241684 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.241702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.241729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.241763 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.344028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.344072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.344087 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.344104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.344115 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.445977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.446027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.446039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.446056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.446069 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.547977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.548053 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.548086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.548103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.548113 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.566400 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.566531 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.566612 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.566683 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.566695 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.566861 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.630910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.630947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.630955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.630970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.630980 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.648637 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:15Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.653109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.653168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.653190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.653217 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.653238 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.670506 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:15Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.675001 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.675047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.675059 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.675078 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.675091 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.688445 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:15Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.694395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.694437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.694448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.694464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.694475 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.712106 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:15Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.719633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.719726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.719742 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.719763 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.719781 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.736385 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:15Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:15 crc kubenswrapper[4853]: E1209 16:57:15.736581 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.738621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
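The two failed patch attempts above share one root cause: the node-identity admission webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-09, so every node-status PATCH is rejected and the kubelet gives up after its retry budget ("update node status exceeds retry count"; the upstream kubelet default is five attempts). A quick way to confirm what that endpoint is serving is to complete a TLS handshake and print the certificate's validity window. A minimal Go sketch, assuming only the address taken from the log (verification is skipped deliberately so the handshake survives long enough to read the expired certificate; everything else is illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Dial the webhook endpoint reported in the kubelet log.
        // Verification is skipped on purpose: we want to inspect the
        // very certificate that normal verification rejects as expired.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("handshake failed: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore)
        fmt.Printf("notAfter:  %s\n", cert.NotAfter) // per the log, expect 2025-08-24T17:21:41Z
    }

On a CRC cluster this pattern usually means the VM was started long after its embedded certificates were minted; the cluster normally rotates them itself once kube-apiserver settles, so no kubelet-side change is implied by these lines.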
event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.738664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.738675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.738690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.738701 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.842263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.842309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.842322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.842352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.842366 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.945199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.945248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.945262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.945278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:15 crc kubenswrapper[4853]: I1209 16:57:15.945288 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:15Z","lastTransitionTime":"2025-12-09T16:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.048752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.048803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.048819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.048836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.048854 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.151360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.151417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.151441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.151470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.151492 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.253568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.253683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.253698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.253712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.253722 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.356866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.356926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.356936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.356956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.356969 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.460117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.460179 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.460191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.460210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.460222 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.563040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.563078 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.563228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.563329 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.563345 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.566568 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:16 crc kubenswrapper[4853]: E1209 16:57:16.566842 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.665492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.665590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.665631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.665650 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.665663 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.768030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.768097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.768111 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.768135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.768148 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.871240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.871309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.871321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.871341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.871355 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.973323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.973400 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.973421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.973445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:16 crc kubenswrapper[4853]: I1209 16:57:16.973462 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:16Z","lastTransitionTime":"2025-12-09T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.076092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.076133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.076147 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.076165 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.076178 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.178784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.178834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.178844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.178858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.178868 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.280680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.280729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.280740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.280758 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.280769 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.385905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.385939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.385950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.385966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.385980 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.488615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.488653 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.488664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.488679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.488688 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.567241 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:17 crc kubenswrapper[4853]: E1209 16:57:17.567386 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.567244 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.567481 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:17 crc kubenswrapper[4853]: E1209 16:57:17.567533 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:17 crc kubenswrapper[4853]: E1209 16:57:17.567644 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.591284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.591323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.591333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.591350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.591360 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.694263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.694305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.694315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.694331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.694342 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.796944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.796991 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.797003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.797019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.797031 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.900104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.900348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.900357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.900370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:17 crc kubenswrapper[4853]: I1209 16:57:17.900379 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:17Z","lastTransitionTime":"2025-12-09T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.003679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.003720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.003729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.003745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.003754 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.105901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.105960 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.105978 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.106002 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.106018 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.208976 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.209024 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.209034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.209050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.209058 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.311303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.311335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.311344 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.311356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.311365 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.414117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.414369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.414453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.414546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.414645 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.517658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.517746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.517782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.517816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.517840 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.567083 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:18 crc kubenswrapper[4853]: E1209 16:57:18.567222 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.567913 4853 scope.go:117] "RemoveContainer" containerID="5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435" Dec 09 16:57:18 crc kubenswrapper[4853]: E1209 16:57:18.568179 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.621105 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.621149 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.621161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.621177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.621188 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.724023 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.724081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.724097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.724117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.724133 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.827697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.827771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.827783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.827801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.827814 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.930533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.930584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.930631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.930657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:18 crc kubenswrapper[4853]: I1209 16:57:18.930686 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:18Z","lastTransitionTime":"2025-12-09T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.033548 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.033629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.033640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.033659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.033672 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.135745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.135783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.135792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.135805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.135817 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.238878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.238925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.238941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.238962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.238977 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.342218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.342302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.342313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.342353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.342365 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.445048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.445090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.445103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.445164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.445178 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.547544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.547576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.547589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.547627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.547639 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.567043 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.567059 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.567160 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:19 crc kubenswrapper[4853]: E1209 16:57:19.567290 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:19 crc kubenswrapper[4853]: E1209 16:57:19.567347 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:19 crc kubenswrapper[4853]: E1209 16:57:19.567432 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.651189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.651231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.651242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.651256 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.651266 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.753545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.753576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.753583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.753610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.753620 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.856326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.856367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.856378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.856392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.856403 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.958513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.958552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.958562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.958576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:19 crc kubenswrapper[4853]: I1209 16:57:19.958587 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:19Z","lastTransitionTime":"2025-12-09T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.061224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.061254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.061262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.061274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.061283 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.163103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.163147 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.163158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.163173 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.163184 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.265450 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.265490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.265499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.265511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.265521 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.368394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.368466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.368482 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.368501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.368521 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.470814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.470866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.470883 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.470903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.470918 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.566522 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:20 crc kubenswrapper[4853]: E1209 16:57:20.566701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.573010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.573042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.573052 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.573066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.573087 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.675287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.675324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.675332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.675346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.675354 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.778340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.778385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.778394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.778412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.778430 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.881616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.881656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.881665 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.881681 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.881691 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.984525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.984559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.984571 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.984587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:20 crc kubenswrapper[4853]: I1209 16:57:20.984619 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:20Z","lastTransitionTime":"2025-12-09T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.087248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.087319 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.087335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.087351 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.087364 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.189306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.189352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.189361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.189377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.189387 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.291671 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.291713 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.291725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.291741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.291753 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.394625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.394672 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.394685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.394702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.394714 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.496709 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.496749 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.496761 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.496778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.496790 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.566663 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:21 crc kubenswrapper[4853]: E1209 16:57:21.566775 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.566832 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.566905 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:21 crc kubenswrapper[4853]: E1209 16:57:21.566986 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:21 crc kubenswrapper[4853]: E1209 16:57:21.567143 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.577173 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.598677 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.598725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.598736 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.598752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.598763 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
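
The "SyncLoop ADD" entry above is a different kind of event from the readiness churn around it: the kubelet's sync loop has just learned about a new pod (kube-rbac-proxy-crio-crc) from its API-server watch. The sketch below shows the generic client-go equivalent of observing such ADD events; the namespace is taken from the log entry, while the kubeconfig handling is an assumption for illustration, and none of this is kubelet-internal code.

    // Illustrative sketch only: a generic client-go watch whose ADDED events
    // roughly correspond to the kubelet's `"SyncLoop ADD" source="api"` entries.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig discovery is an assumption; in-cluster config also works.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	w, err := client.CoreV1().Pods("openshift-machine-config-operator").
    		Watch(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for ev := range w.ResultChan() {
    		fmt.Println("event:", ev.Type) // e.g. ADDED for a newly created pod
    	}
    }
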
Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.701527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.701565 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.701576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.701592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.701622 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.804901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.804988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.805009 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.805032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.805050 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.907500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.907563 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.907586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.907663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:21 crc kubenswrapper[4853]: I1209 16:57:21.907688 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:21Z","lastTransitionTime":"2025-12-09T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.010162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.010204 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.010216 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.010233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.010248 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.112888 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.112975 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.112983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.112997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.113005 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.215493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.215531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.215540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.215554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.215565 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.282349 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:22 crc kubenswrapper[4853]: E1209 16:57:22.282572 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:57:22 crc kubenswrapper[4853]: E1209 16:57:22.282697 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. No retries permitted until 2025-12-09 16:57:54.282672619 +0000 UTC m=+101.217411861 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.317447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.317478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.317486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.317498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.317507 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
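
The nestedpendingoperations entry above shows the volume manager's exponential back-off in action: mounting metrics-certs keeps failing because the secret openshift-multus/metrics-daemon-secret is not yet registered with this kubelet, and each failure roughly doubles the wait before the next attempt, here landing at 32s. The Go sketch below reproduces that doubling; the 500 ms base and the cap are assumptions chosen to be consistent with the 32s seen in the log (the seventh consecutive failure), not constants read out of kubelet source.

    // Illustrative sketch only: a doubling retry delay of the kind behind
    // "No retries permitted until ... (durationBeforeRetry 32s)".
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond // assumed base delay
    	maxDelay := 2 * time.Minute     // assumed cap
    	for attempt := 1; delay <= maxDelay; attempt++ {
    		fmt.Printf("failure %d -> next retry in %v\n", attempt, delay)
    		delay *= 2 // 0.5s, 1s, 2s, 4s, 8s, 16s, 32s, ...
    	}
    }
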
Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.420003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.420073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.420089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.420113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.420130 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.523094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.523143 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.523155 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.523172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.523184 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.566782 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:22 crc kubenswrapper[4853]: E1209 16:57:22.566958 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.625508 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.625552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.625569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.625589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.625629 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.727819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.727867 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.727880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.727897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.727908 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.830358 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.830408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.830422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.830441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.830456 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.934366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.934457 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.934490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.934537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:22 crc kubenswrapper[4853]: I1209 16:57:22.934563 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:22Z","lastTransitionTime":"2025-12-09T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.036680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.036711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.036719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.036732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.036741 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.139326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.139378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.139390 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.139409 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.139421 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.242197 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.242271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.242293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.242319 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.242335 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.345180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.345225 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.345239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.345256 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.345268 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.448152 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.448192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.448203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.448219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.448230 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.551025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.551252 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.551316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.551375 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.551438 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.566444 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.566507 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:23 crc kubenswrapper[4853]: E1209 16:57:23.566654 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.566556 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:23 crc kubenswrapper[4853]: E1209 16:57:23.566798 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:23 crc kubenswrapper[4853]: E1209 16:57:23.566861 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.584141 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.595583 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.606559 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.615479 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.635750 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.648262 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.652880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.652911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.652921 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.652936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.652947 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.659959 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.670056 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30714c24-9b34-423c-843b-c3a5c744a7ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.683489 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.694579 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.707877 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.719583 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.746172 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.754829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.754914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.754925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.754944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.754956 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.760740 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.772133 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.785119 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.794537 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.805778 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.819819 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.857618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.857658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.857668 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.857683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.857694 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.959189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.959221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.959234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.959246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.959255 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:23Z","lastTransitionTime":"2025-12-09T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.961054 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/0.log" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.961098 4853 generic.go:334] "Generic (PLEG): container finished" podID="8b02f072-d8cc-4c46-8159-fe99d19b24a6" containerID="9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc" exitCode=1 Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.961126 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerDied","Data":"9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc"} Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.961505 4853 scope.go:117] "RemoveContainer" containerID="9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.975739 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.986935 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:23 crc kubenswrapper[4853]: I1209 16:57:23.999401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:23Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.011837 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.025809 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"2025-12-09T16:56:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f\\\\n2025-12-09T16:56:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f to /host/opt/cni/bin/\\\\n2025-12-09T16:56:38Z [verbose] multus-daemon started\\\\n2025-12-09T16:56:38Z [verbose] Readiness Indicator file check\\\\n2025-12-09T16:57:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.038452 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.050637 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.060992 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.061028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.061036 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.061051 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.061061 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.062140 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.083662 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.095511 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.107808 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.119835 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30714c24-9b34-423c-843b-c3a5c744a7ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.133115 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.145206 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.156511 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.163401 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.163431 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.163440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.163471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.163481 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.169233 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.191058 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.204294 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.216423 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.266312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.266378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.266396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.266412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.266425 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.368433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.368471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.368479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.368497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.368508 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.471473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.471527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.471540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.471559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.471574 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.566187 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:57:24 crc kubenswrapper[4853]: E1209 16:57:24.566309 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.573325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.573370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.573379 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.573394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.573403 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.679735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.679779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.679791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.679807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.679819 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.782445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.782812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.782955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.783073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.783180 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.885729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.885781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.885794 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.885811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.885823 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.966828 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/0.log" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.966922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerStarted","Data":"3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384"} Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.980106 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.987729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.987769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.987782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.987797 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.987809 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:24Z","lastTransitionTime":"2025-12-09T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:24 crc kubenswrapper[4853]: I1209 16:57:24.992531 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:24Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.004501 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.020315 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.042411 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9
d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.057020 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.071134 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"2025-12-09T16:56:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f\\\\n2025-12-09T16:56:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f to /host/opt/cni/bin/\\\\n2025-12-09T16:56:38Z [verbose] multus-daemon started\\\\n2025-12-09T16:56:38Z [verbose] Readiness Indicator file check\\\\n2025-12-09T16:57:23Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.093031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.093323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.093430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.093514 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.093608 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.098289 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.111388 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.127246 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.141986 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.154902 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d92
90beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.169821 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.184592 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.196589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.196661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.196674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.196691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.196703 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.202279 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.220683 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.234352 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.246981 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.259468 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30714c24-9b34-423c-843b-c3a5c744a7ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.299486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.299536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.299554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.299574 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.299589 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.402625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.402666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.402677 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.402693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.402703 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.504399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.504442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.504452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.504468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.504479 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.567033 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.567094 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.567171 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.567314 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.567397 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.567544 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.606799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.606848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.606865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.606883 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.606897 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.709451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.709498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.709508 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.709522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.709531 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.778746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.778789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.778801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.778819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.778832 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.794259 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.798830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.798869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.798878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.798892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.798900 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.813889 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.817631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.817666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.817675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.817688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.817697 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.835107 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.839041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.839081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.839092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.839109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.839120 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.856348 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.860505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.860673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.860760 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.860873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.860965 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.875802 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:25Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:25 crc kubenswrapper[4853]: E1209 16:57:25.876227 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.877688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.877714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.877724 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.877739 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.877750 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.979817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.979841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.979848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.979861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:25 crc kubenswrapper[4853]: I1209 16:57:25.979870 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:25Z","lastTransitionTime":"2025-12-09T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.081915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.081986 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.082001 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.082016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.082026 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.184027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.184067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.184075 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.184089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.184099 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.286111 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.286152 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.286162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.286178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.286189 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.388102 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.388133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.388140 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.388153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.388161 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.490512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.490536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.490544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.490557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.490565 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.566184 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:26 crc kubenswrapper[4853]: E1209 16:57:26.566378 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.593402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.593444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.593456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.593473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.593486 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.696702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.696769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.696792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.696819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.696840 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.800251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.800305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.800322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.800346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.800364 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.902869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.902912 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.902922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.902939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:26 crc kubenswrapper[4853]: I1209 16:57:26.902951 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:26Z","lastTransitionTime":"2025-12-09T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.005339 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.005419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.005442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.005472 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.005496 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.201626 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.201665 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.201673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.201689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.201698 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.304851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.304905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.304916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.304933 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.304942 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.407871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.407913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.407925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.407941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.407954 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.510028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.510076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.510086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.510101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.510112 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.566333 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.566441 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:27 crc kubenswrapper[4853]: E1209 16:57:27.566473 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.573065 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:27 crc kubenswrapper[4853]: E1209 16:57:27.573267 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:27 crc kubenswrapper[4853]: E1209 16:57:27.573407 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.612421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.612458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.612466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.612480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.612490 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.715826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.715878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.715890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.715914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.715931 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.819034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.819114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.819138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.819167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.819187 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.922382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.922445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.922454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.922470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:27 crc kubenswrapper[4853]: I1209 16:57:27.922482 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:27Z","lastTransitionTime":"2025-12-09T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.025622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.025661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.025668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.025682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.025691 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.128290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.128330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.128340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.128355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.128366 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.230403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.230438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.230450 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.230467 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.230478 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.332903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.332933 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.332943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.332957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.332967 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.435461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.435491 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.435500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.435513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.435523 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.538427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.538463 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.538473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.538487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.538498 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.566465 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:28 crc kubenswrapper[4853]: E1209 16:57:28.566657 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.640854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.640889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.640900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.640916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.640928 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.744135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.744191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.744207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.744231 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.744248 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.847656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.847693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.847702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.847745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.847756 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.950189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.950253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.950264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.950279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:28 crc kubenswrapper[4853]: I1209 16:57:28.950291 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:28Z","lastTransitionTime":"2025-12-09T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.052956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.053007 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.053018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.053034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.053044 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.155345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.155382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.155393 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.155408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.155419 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.257965 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.258004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.258012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.258027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.258036 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.360353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.361167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.361206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.361234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.361257 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.464360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.464408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.464420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.464435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.464446 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.566251 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:29 crc kubenswrapper[4853]: E1209 16:57:29.566370 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.566433 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:29 crc kubenswrapper[4853]: E1209 16:57:29.566590 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.566664 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:29 crc kubenswrapper[4853]: E1209 16:57:29.566802 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.567569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.567616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.567625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.567638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.567648 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.670329 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.670371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.670388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.670405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.670415 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.773083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.773126 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.773133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.773148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.773157 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.876072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.876119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.876138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.876166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.876188 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.979221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.979253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.979263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.979300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:29 crc kubenswrapper[4853]: I1209 16:57:29.979311 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:29Z","lastTransitionTime":"2025-12-09T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.082378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.082441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.082452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.082470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.082483 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.184423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.184505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.184517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.184537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.184563 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.286458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.286497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.286506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.286521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.286531 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.389498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.389551 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.389562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.389580 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.389591 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.491575 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.491695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.491708 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.491724 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.491733 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.566588 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:30 crc kubenswrapper[4853]: E1209 16:57:30.567074 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.567328 4853 scope.go:117] "RemoveContainer" containerID="5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.594360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.594563 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.594571 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.594584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.594634 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.696827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.696884 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.696901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.696924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.696951 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.799752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.799780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.799790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.799808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.799819 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.901742 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.901784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.901795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.901811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.901822 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:30Z","lastTransitionTime":"2025-12-09T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.985152 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/2.log" Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.993861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b"} Dec 09 16:57:30 crc kubenswrapper[4853]: I1209 16:57:30.994945 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.003490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.003522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.003533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.003547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.003557 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.011032 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.023186 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.034855 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30714c24-9b34-423c-843b-c3a5c744a7ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.050211 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.063217 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.075384 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.095066 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.105849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.105901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.105912 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.105929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.105943 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.108355 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.121386 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc 
kubenswrapper[4853]: I1209 16:57:31.146469 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.165562 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.181890 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.194900 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.206030 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.208042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.208075 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.208084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.208096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.208107 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.218926 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.232897 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.246428 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"2025-12-09T16:56:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f\\\\n2025-12-09T16:56:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f to /host/opt/cni/bin/\\\\n2025-12-09T16:56:38Z [verbose] multus-daemon started\\\\n2025-12-09T16:56:38Z [verbose] Readiness Indicator file check\\\\n2025-12-09T16:57:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.259820 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.269914 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:31Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.311048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.311097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.311109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.311133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.311146 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.413296 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.413358 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.413378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.413401 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.413418 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.516485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.516522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.516532 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.516546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.516556 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.566462 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.566488 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.566512 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:31 crc kubenswrapper[4853]: E1209 16:57:31.566660 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:31 crc kubenswrapper[4853]: E1209 16:57:31.566774 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:31 crc kubenswrapper[4853]: E1209 16:57:31.566877 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.618792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.618849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.618865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.618890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.618912 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.721868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.721920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.721936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.721963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.721985 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.825257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.825309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.825320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.825340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.825351 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.939521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.939585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.939628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.939651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:31 crc kubenswrapper[4853]: I1209 16:57:31.939670 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:31Z","lastTransitionTime":"2025-12-09T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.042168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.042239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.042258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.042284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.042302 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.145078 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.145150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.145167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.145192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.145209 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.248470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.248517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.248565 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.248585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.248619 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.351440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.351483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.351492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.351525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.351534 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.454245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.454324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.454348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.454378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.454399 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.557322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.557378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.557396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.557419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.557436 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.566524 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:32 crc kubenswrapper[4853]: E1209 16:57:32.566675 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.659774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.659807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.659817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.659835 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.659846 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.762223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.762257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.762265 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.762277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.762286 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.865153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.865228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.865246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.865270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.865289 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.968289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.968346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.968365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.968381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:32 crc kubenswrapper[4853]: I1209 16:57:32.968393 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:32Z","lastTransitionTime":"2025-12-09T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.003658 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/3.log" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.004355 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/2.log" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.007089 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b" exitCode=1 Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.007142 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.007189 4853 scope.go:117] "RemoveContainer" containerID="5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.007819 4853 scope.go:117] "RemoveContainer" containerID="c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b" Dec 09 16:57:33 crc kubenswrapper[4853]: E1209 16:57:33.007978 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.022549 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.044325 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.058663 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.070881 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.070914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.070927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.070882 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.070944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.071075 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.085632 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.094866 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.108143 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.124832 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.137194 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"2025-12-09T16:56:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f\\\\n2025-12-09T16:56:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f to /host/opt/cni/bin/\\\\n2025-12-09T16:56:38Z [verbose] multus-daemon started\\\\n2025-12-09T16:56:38Z [verbose] Readiness Indicator file check\\\\n2025-12-09T16:57:23Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.158008 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.173414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.173605 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc 
kubenswrapper[4853]: I1209 16:57:33.173668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.173832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.173905 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.183699 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.195642 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.215436 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:32Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"4c1be812-05d3-4f45-91b5-a853a5c8de71\\\\\\\", Protocol:\\\\\\\"udp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.239547 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.251410 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.262817 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30714c24-9b34-423c-843b-c3a5c744a7ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.275863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.275913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.275929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.275949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.275965 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.278318 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.292045 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.302679 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.378576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.378671 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.378689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.378712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.378728 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.481253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.481300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.481316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.481337 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.481352 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.566460 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.566532 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.566476 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:33 crc kubenswrapper[4853]: E1209 16:57:33.566618 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:33 crc kubenswrapper[4853]: E1209 16:57:33.566697 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:33 crc kubenswrapper[4853]: E1209 16:57:33.566778 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.583177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.583214 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.583225 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.583239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.583250 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.585515 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.603761 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\
\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:32Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"4c1be812-05d3-4f45-91b5-a853a5c8de71\\\\\\\", Protocol:\\\\\\\"udp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.613531 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.622579 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.631407 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30714c24-9b34-423c-843b-c3a5c744a7ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.645877 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.661257 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.671231 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.681945 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.685176 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.685199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.685207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.685219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.685228 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.697892 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.709480 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.727127 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.739959 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.748552 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.759887 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.778423 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.788189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.788237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.788251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.788270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.788288 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.793177 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"2025-12-09T16:56:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f\\\\n2025-12-09T16:56:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f to /host/opt/cni/bin/\\\\n2025-12-09T16:56:38Z [verbose] multus-daemon started\\\\n2025-12-09T16:56:38Z [verbose] Readiness Indicator file check\\\\n2025-12-09T16:57:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.808139 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.827826 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:33Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.890026 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.890058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.890070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.890090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.890102 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.992733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.992784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.992795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.992812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:33 crc kubenswrapper[4853]: I1209 16:57:33.992824 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:33Z","lastTransitionTime":"2025-12-09T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.011499 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/3.log" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.095461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.095550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.095568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.095590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.095634 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.197518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.197555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.197564 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.197578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.197587 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.299346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.299386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.299396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.299411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.299422 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.402443 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.402485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.402494 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.402509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.402518 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.505188 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.505258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.505283 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.505314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.505339 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.566812 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:34 crc kubenswrapper[4853]: E1209 16:57:34.567148 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.608283 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.608326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.608338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.608353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.608365 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.716924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.717001 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.717019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.717040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.717055 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.819666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.819711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.819723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.819748 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.819760 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.922364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.922413 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.922424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.922441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:34 crc kubenswrapper[4853]: I1209 16:57:34.922452 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:34Z","lastTransitionTime":"2025-12-09T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.025427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.025471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.025480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.025494 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.025504 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.128246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.128292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.128303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.128320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.128333 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.230629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.230674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.230683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.230697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.230707 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.333410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.333474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.333491 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.333514 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.333535 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.409340 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.409503 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:39.409480835 +0000 UTC m=+146.344220017 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.409590 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.409677 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.409723 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:58:39.409713182 +0000 UTC m=+146.344452364 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.409713 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.409844 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.409901 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 16:58:39.409883297 +0000 UTC m=+146.344622509 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.436239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.436294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.436304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.436317 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.436325 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.510908 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.511367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.511204 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.511707 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.511729 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.511801 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 16:58:39.511777835 +0000 UTC m=+146.446517047 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.511503 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.512032 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.512047 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.512090 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 16:58:39.512074864 +0000 UTC m=+146.446814076 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.539504 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.539529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.539537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.539553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.539573 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.566351 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.566428 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.566468 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.566635 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.566629 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:35 crc kubenswrapper[4853]: E1209 16:57:35.566786 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.641709 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.641749 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.641762 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.641777 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.641788 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.744207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.744244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.744255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.744271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.744283 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.847720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.847804 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.847829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.847864 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.847889 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.951376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.951428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.951440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.951455 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.951466 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.982141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.982218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.982237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.982268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:35 crc kubenswrapper[4853]: I1209 16:57:35.982294 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:35Z","lastTransitionTime":"2025-12-09T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: E1209 16:57:36.004517 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5d669b96-627f-4105-ba3d-ff7569a6f697\\\",\\\"systemUUID\\\":\\\"66dfaf11-4892-4e38-8caa-0f87e61cbeaf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.009713 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.009771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.009782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.009802 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.009816 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: E1209 16:57:36.026366 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.030330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.030373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.030382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.030415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.030425 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: E1209 16:57:36.044461 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.048566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.048659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.048678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.048701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.048716 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: E1209 16:57:36.064304 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.067978 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.068023 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.068035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.068090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.068105 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: E1209 16:57:36.082483 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node status patch payload identical to the previous attempt, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:36Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:36 crc kubenswrapper[4853]: E1209 16:57:36.082624 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.084028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.084052 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.084062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.084077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.084089 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.186545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.186578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.186592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.186625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.186636 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.288934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.288987 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.288998 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.289012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.289021 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.394152 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.394215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.394228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.394376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.394410 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.497557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.497611 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.497624 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.497640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.497651 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.566285 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:36 crc kubenswrapper[4853]: E1209 16:57:36.566487 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.599823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.599857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.599870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.599885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.599897 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.702411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.702526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.702545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.702571 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.702660 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.805187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.805253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.805271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.805293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.805311 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.907355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.907409 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.907421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.907440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:36 crc kubenswrapper[4853]: I1209 16:57:36.907453 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:36Z","lastTransitionTime":"2025-12-09T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.010154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.010201 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.010212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.010230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.010243 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.113145 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.113208 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.113218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.113237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.113248 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.215676 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.215723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.215735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.215755 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.215769 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.318263 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.318303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.318314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.318329 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.318340 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.420952 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.421014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.421026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.421042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.421079 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.523805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.523851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.523863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.523881 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.523892 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.566512 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:37 crc kubenswrapper[4853]: E1209 16:57:37.566690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.566512 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.566536 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:37 crc kubenswrapper[4853]: E1209 16:57:37.567001 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:37 crc kubenswrapper[4853]: E1209 16:57:37.567071 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.626274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.626318 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.626330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.626346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.626385 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.729820 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.729885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.729902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.729925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.729944 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.832735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.832760 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.832767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.832781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.832789 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.936011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.936085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.936103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.936132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:37 crc kubenswrapper[4853]: I1209 16:57:37.936151 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:37Z","lastTransitionTime":"2025-12-09T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.039273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.039335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.039350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.039371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.039386 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.141992 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.142064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.142082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.142108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.142127 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.244752 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.244787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.244797 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.244812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.244822 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.348236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.348296 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.348306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.348323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.348334 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.451395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.451436 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.451446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.451484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.451494 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.553939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.553985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.554000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.554015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.554026 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.566448 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:38 crc kubenswrapper[4853]: E1209 16:57:38.566701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.656438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.656502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.656512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.656525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.656538 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.759806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.759858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.759872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.759889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.759905 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.862380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.862416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.862424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.862439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.862448 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.965875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.965945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.965968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.965996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:38 crc kubenswrapper[4853]: I1209 16:57:38.966017 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:38Z","lastTransitionTime":"2025-12-09T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.068958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.068996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.069005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.069018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.069027 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.173858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.173911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.173923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.173942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.173958 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.277656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.277710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.277728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.277746 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.277758 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.379953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.380001 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.380011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.380028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.380037 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.482356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.482398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.482408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.482424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.482436 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.566256 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:39 crc kubenswrapper[4853]: E1209 16:57:39.566404 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.566275 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.566290 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:39 crc kubenswrapper[4853]: E1209 16:57:39.566486 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:39 crc kubenswrapper[4853]: E1209 16:57:39.566542 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.585479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.585527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.585539 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.585555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.585566 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.687573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.687638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.687651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.687675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.687688 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.789735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.789798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.789842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.789875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.789903 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.893905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.893950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.893963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.894020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.894036 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.996756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.996791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.996799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.996813 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:39 crc kubenswrapper[4853]: I1209 16:57:39.996823 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:39Z","lastTransitionTime":"2025-12-09T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.099390 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.099449 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.099465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.099484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.099496 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.203096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.203174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.203191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.203215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.203235 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.305673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.305714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.305722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.305735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.305744 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.407657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.407695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.407705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.407721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.407732 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.510262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.510332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.510355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.510385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.510405 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.566615 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:40 crc kubenswrapper[4853]: E1209 16:57:40.566830 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.613291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.613350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.613367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.613394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.613411 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.717108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.717207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.717247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.717271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.717287 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.820167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.820240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.820251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.820266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.820278 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.923027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.923056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.923064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.923077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:40 crc kubenswrapper[4853]: I1209 16:57:40.923085 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:40Z","lastTransitionTime":"2025-12-09T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.026054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.026091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.026102 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.026119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.026130 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.128644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.128690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.128701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.128716 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.128727 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.231371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.231433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.231444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.231466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.231478 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.333828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.333880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.333895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.333916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.333930 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.436376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.436434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.436442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.436458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.436468 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.539515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.539628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.539650 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.539680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.539701 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.566850 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.566897 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.566875 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:41 crc kubenswrapper[4853]: E1209 16:57:41.567017 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:41 crc kubenswrapper[4853]: E1209 16:57:41.567072 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:41 crc kubenswrapper[4853]: E1209 16:57:41.567147 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.641844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.641893 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.641907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.641930 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.641945 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.744025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.744094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.744108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.744127 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.744143 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.847337 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.847386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.847399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.847417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.847432 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.950049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.950108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.950121 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.950141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:41 crc kubenswrapper[4853]: I1209 16:57:41.950156 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:41Z","lastTransitionTime":"2025-12-09T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.051940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.051988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.051997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.052011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.052022 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.154790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.154842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.154850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.154865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.154874 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.258089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.258137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.258150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.258168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.258179 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.360187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.360244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.360254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.360268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.360280 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.462955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.463000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.463011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.463028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.463040 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.565447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.565525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.565547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.565578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.565644 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.566300 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:42 crc kubenswrapper[4853]: E1209 16:57:42.566469 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.668703 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.668770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.668780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.668804 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.668818 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.771511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.771571 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.771587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.771640 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.771661 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.874366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.874433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.874449 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.874473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.874491 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.977890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.977942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.977953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.977970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:42 crc kubenswrapper[4853]: I1209 16:57:42.977985 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:42Z","lastTransitionTime":"2025-12-09T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.081183 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.081237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.081252 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.081277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.081294 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.185449 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.185505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.185522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.185545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.185562 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.288004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.288040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.288047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.288061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.288070 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.390636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.390740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.390764 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.390792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.390811 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.493879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.493931 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.493947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.493972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.493990 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.566519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.566710 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.566541 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:43 crc kubenswrapper[4853]: E1209 16:57:43.566771 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:43 crc kubenswrapper[4853]: E1209 16:57:43.566956 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:43 crc kubenswrapper[4853]: E1209 16:57:43.567330 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.588112 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e036ba1-c8bd-48d7-bd93-71993300b60f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38bd8150dd39ed443092ea1f980938ed447b6fac18a6dd36fa5da20d3c17024d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfpw4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kwsj4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.596942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.597007 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.597030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.597060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.597086 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.611407 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 16:56:31.122181 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 16:56:31.122344 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 16:56:31.124004 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704776276/tls.crt::/tmp/serving-cert-2704776276/tls.key\\\\\\\"\\\\nI1209 16:56:31.472000 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 16:56:31.474380 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 16:56:31.474399 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 16:56:31.474419 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 16:56:31.474426 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 16:56:31.481822 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 16:56:31.481851 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 16:56:31.481859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1209 16:56:31.481865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 16:56:31.481869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 16:56:31.481873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 16:56:31.481878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 16:56:31.483426 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 16:56:31.485494 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.629395 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2adac171c3aeaca916cf7437783ca8df4086c529043501c69227fa63e223301d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.646464 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z"
Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.658194 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-svpfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31c5f775-f793-4d43-9503-0070fc5ba186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07109350c2746f6a039f969bad355a90a22666632db9001f334dce40755af5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92pjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-svpfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z"
Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.675904 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18ca0bf-dc49-4000-97e9-9a64adac54de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c93022f46dbc4cea54961029fa87d362525f3257
e8b5830a59c9dbf516a3a78b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5dc8a18e3c6a54ea846d51370bac882c096de9b69d2e7034424edcdac540d435\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"message\\\":\\\"ing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-77995]\\\\nI1209 16:57:05.430154 6517 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 16:57:05.430205 6517 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1209 16:57:05.430187 6517 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 16:57:05.430243 6517 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-77995 before timer (time: 2025-12-09 16:57:06.532712026 +0000 UTC m=+1.620624303): skip\\\\nI1209 16:57:05.430268 6517 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 16:57:05.430253 6517 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 16:57:05.430277 6517 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 100.753µs)\\\\nI1209 16:57:05.430304 6517 factory.go:656] Stopping watch factory\\\\nI1209 16:57:05.430316 6517 ovnkube.go:599] Stopped ovnkube\\\\nI1209 16:57:05.430357 6517 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1209 16:57:05.430360 6517 handler.go:208] Removed *v1.Node event handler 2\\\\nF1209 16:57:05.430483 6517 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:32Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_UDP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"4c1be812-05d3-4f45-91b5-a853a5c8de71\\\\\\\", Protocol:\\\\\\\"udp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, 
Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fmxt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fzlgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.688155 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22a4b32a-bcd8-400d-956d-6971df0d5c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3434a119981f6c0f55f054f74d56f74671aa59ee31f604c2494a14c42367572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06dd778f715a68137ab39b2aedc96f395bcae16cf3a5306e3ec95cc6f00d0d71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rkrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x2mnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.697546 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-77995" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d55def8-578d-461b-9514-07eea9c62336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q656q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-77995\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.698797 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.698831 4853 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.698841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.698854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.698864 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.709123 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30714c24-9b34-423c-843b-c3a5c744a7ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a099bf0e0e1e9d623a1334d0923cffae6fe94b736206abeedb078922448ca86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3cf506df1a6f1cfccde9c3c7bdce314ad09f0be59924939b059f24d8633986b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.725072 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c543c269-b5f7-4705-86b7-dc46575c92c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90de6876aef05c74f9a5b78d64b6a071615b1898ea90505b11aa2ccf1ff0cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9916abe6cc8b52018
657ef9b5ec6bb0c8ef3bedefa572e6addbe17daad0775f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c32865c8941684e802aa22d99345fe3d35478015a5f4e66865f64868c94cfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.740850 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16e70678-1a24-4ab7-b15f-0e65a17a4a24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62d47490ec331037894d2d49103256711206825456fa0c3315590795225b7e8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06d8d24d1daddef5fdde29228bc58c6063eff198e711ea21529e53b6c2c54b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd38844d691110403e5369f0a0f070b4684a35af8bed376b000598cc705c1869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c592f359fd9e8127c891694a06743207231831da32aaac0f92bf769b6d960ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.753055 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18cd40a7859392834dc5b8ea4a48bb8aa4da85d32028cefa8ac93b94757ad32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 
16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.768941 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.801465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.801498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.801510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.801529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.801542 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.811491 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e0e3242-2c4c-4082-89ad-aff62ff6da52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://060a17d1644d15693a11d0654281dce2850a1b0466a7854701754e5c29443ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dace4b1b0b209c6583c950b6d55fdae19a8ebed780cfa59303e164cce82d720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c1dbcbc3c7bb05130ded4eedbcd30dbefe09f9ab84dd5eb4ecfae018d7a337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8925d987bfebfd96949a7b02f0954380e5de3c9d0983f7f4b28a3485604ff811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d689624e086e752b210716a2e90e7dca6089ace1541d633f06f422116543fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349c22c6f880d777ea7eed6c0d6a385a45bebd91ce706a8cd1f8f966c7132071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92dbd5b6d1ad4dfd5691d65b0db26e38ad97483e51113db718e850df6600c2c4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T16:56:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598b82bb984ce82edfb005fe4d8c0f0a6a3ed1c9ff09562a8a2348266acc4641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.828181 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5265853f81d29d44b9953eff9a1b38a6a3774c587a7f924262c3c52557943ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98c2a893af062e76e5d34c956cd3c1ca1097557465033d9efa297c4a0b9f8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.841335 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fmrzg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b02f072-d8cc-4c46-8159-fe99d19b24a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:57:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T16:57:23Z\\\",\\\"message\\\":\\\"2025-12-09T16:56:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f\\\\n2025-12-09T16:56:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8f1ac976-3e15-4a8d-a444-a98853c45a6f to /host/opt/cni/bin/\\\\n2025-12-09T16:56:38Z [verbose] multus-daemon started\\\\n2025-12-09T16:56:38Z [verbose] Readiness Indicator file check\\\\n2025-12-09T16:57:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fmrzg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.856220 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qzngg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5faabd8c-2204-4f29-9961-392416e98677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d052d9b7c42f749f9e7c6f4b4ff39d732d857491307670b17230fb63db7dc40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6347ef2f064abd0737ee20d644a8243f4d69f38f6bf52041192d889f3e80bd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9399a28b36da826863110b17097f3a9c3d56131a4e5bd71870e19056438bc88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1618a0d98af1983a00476f036330f49256ae7f317f227a068ca754b9f5743caa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb3069cca47f8677033a647d7e4af4876ce51d0d0308610ba85d976a921ac51e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42a230741cceaf1cae48aea52b6f61cb86d02611b97ddc51645cf391d17a3561\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65d72873453ac1a0869bdafc93adcc3d2d5784950cd0dbdbe385d24796ad6501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T16:56:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T16:56:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jm8dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qzngg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.867471 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tw8jq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac35c63b-ab53-469b-99f8-f2a354be323d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5656c90ed77b415a1b31284bf4e75d7eaa3523bdea07774f069e81172a2261f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T16:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppb25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T16:56:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tw8jq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.882182 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T16:56:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T16:57:43Z is after 2025-08-24T17:21:41Z" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.904152 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.904201 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.904217 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.904241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:43 crc kubenswrapper[4853]: I1209 16:57:43.904257 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:43Z","lastTransitionTime":"2025-12-09T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
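Every one of the status-patch failures above shares a single root cause recorded in the error text: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2025-12-09. A minimal stdlib sketch, assuming the webhook is still listening on that port, that reproduces the kubelet's verification failure and then fetches the certificate so its validity window can be inspected (depending on the local trust store, the verifying handshake may report an untrusted issuer before it reports expiry):

```python
import socket
import ssl

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the log above

# A verifying handshake should fail the same way the kubelet does
# ("x509: certificate has expired or is not yet valid").
ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST):
            pass
except ssl.SSLCertVerificationError as err:
    print("verification failed:", err.verify_message)

# Fetch the certificate without verification so its notBefore/notAfter
# can be read offline, e.g. by piping the PEM to `openssl x509 -noout -dates`.
fetch = ssl.create_default_context()
fetch.check_hostname = False
fetch.verify_mode = ssl.CERT_NONE
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with fetch.wrap_socket(sock, server_hostname=HOST) as tls:
        print(ssl.DER_cert_to_PEM_cert(tls.getpeercert(binary_form=True)))
```
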
Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.006836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.006901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.006920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.006943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.006967 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.109959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.110030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.110052 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.110083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.110108 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.212828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.212870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.212881 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.212898 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.212908 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.316218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.316275 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.316291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.316316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.316333 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.419849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.419916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.419936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.419962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.419979 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.522496 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.522550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.522558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.522575 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.522585 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.567236 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:44 crc kubenswrapper[4853]: E1209 16:57:44.567724 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.625946 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.626011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.626034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.626063 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.626085 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.729029 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.729095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.729106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.729127 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.729139 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.832111 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.832186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.832207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.832236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.832259 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.934996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.935053 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.935070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.935091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:44 crc kubenswrapper[4853]: I1209 16:57:44.935108 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:44Z","lastTransitionTime":"2025-12-09T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.038383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.038453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.038486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.038518 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.038540 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
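The setters.go:603 entries repeating above carry a structured payload after `condition=`: it is plain JSON, and its Ready=False status with reason KubeletNotReady is exactly what holds the node NotReady while the CNI configuration is missing. A small sketch (with the message shortened for readability) showing how to pull the condition out of one such line:

```python
import json
import re

# One "Node became not ready" entry, shortened from the lines above.
line = ('I1209 16:57:44.832259 4853 setters.go:603] "Node became not ready" '
        'node="crc" condition={"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-12-09T16:57:44Z",'
        '"lastTransitionTime":"2025-12-09T16:57:44Z",'
        '"reason":"KubeletNotReady","message":"container runtime network not ready"}')

# The condition payload parses as ordinary JSON.
cond = json.loads(re.search(r'condition=(\{.*\})', line).group(1))
print(cond["type"], cond["status"], cond["reason"])  # Ready False KubeletNotReady
```
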
Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.141165 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.141237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.141255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.141280 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.141299 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.243779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.243959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.244637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.244691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.244724 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.351672 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.351754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.351782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.351808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.351860 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.454423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.454466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.454475 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.454490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.454499 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.556895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.556955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.556969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.556989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.557000 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.566610 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.566676 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.566739 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:45 crc kubenswrapper[4853]: E1209 16:57:45.566763 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
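The condition clears once a CNI configuration file appears where the kubelet looks for one, which is also what the earlier multus restart was waiting on ("still waiting for readinessindicatorfile"). A rough sketch of that wait, assuming the two paths named in the log are visible on this host; the ~45 s budget mirrors what multus tolerated before erroring (started 16:56:38, gave up 16:57:23):

```python
import glob
import os
import time

# Paths named in the log: kubelet's CNI conf dir and the readiness
# indicator file multus waits for (host paths are assumptions here).
CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"
INDICATOR = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"

def network_ready() -> bool:
    # NetworkReady flips true once any CNI config file shows up.
    return bool(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*"))) \
        or os.path.exists(INDICATOR)

deadline = time.monotonic() + 45
while time.monotonic() < deadline:
    if network_ready():
        print("CNI configuration present")
        break
    time.sleep(1)
else:
    print("timed out waiting for the condition")  # matches the multus error above
```
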
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:45 crc kubenswrapper[4853]: E1209 16:57:45.566859 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:45 crc kubenswrapper[4853]: E1209 16:57:45.567073 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.659799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.659869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.659892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.659921 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.659943 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.762890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.762955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.762973 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.762997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.763014 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.866754 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.866852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.866875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.866913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.866939 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.970453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.970529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.970561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.970591 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:45 crc kubenswrapper[4853]: I1209 16:57:45.970663 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:45Z","lastTransitionTime":"2025-12-09T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.073475 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.073553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.073578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.073642 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.073667 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:46Z","lastTransitionTime":"2025-12-09T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.177524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.177647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.177674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.177714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.177738 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:46Z","lastTransitionTime":"2025-12-09T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.250006 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.250057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.250070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.250084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.250094 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T16:57:46Z","lastTransitionTime":"2025-12-09T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.309574 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px"] Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.310089 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.312358 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.313842 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.314039 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.314547 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.350335 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fmrzg" podStartSLOduration=70.350313162 podStartE2EDuration="1m10.350313162s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.350209548 +0000 UTC m=+93.284948730" watchObservedRunningTime="2025-12-09 16:57:46.350313162 +0000 UTC m=+93.285052354" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.376333 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qzngg" podStartSLOduration=70.376316767 podStartE2EDuration="1m10.376316767s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.376273236 +0000 UTC m=+93.311012428" watchObservedRunningTime="2025-12-09 16:57:46.376316767 +0000 UTC m=+93.311055949" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.389468 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tw8jq" podStartSLOduration=71.389443204 podStartE2EDuration="1m11.389443204s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.389224326 +0000 UTC m=+93.323963508" watchObservedRunningTime="2025-12-09 16:57:46.389443204 +0000 UTC m=+93.324182406" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.432898 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podStartSLOduration=70.432876001 podStartE2EDuration="1m10.432876001s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.416240651 +0000 UTC m=+93.350979883" watchObservedRunningTime="2025-12-09 16:57:46.432876001 +0000 UTC m=+93.367615203" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.441116 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10144c44-c38a-4961-b733-3f37b9db9646-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " 
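The pod_startup_latency_tracker entries above encode simple arithmetic: with no image pull recorded (firstStartedPulling and lastFinishedPulling at the zero time), podStartSLOduration collapses to watchObservedRunningTime minus podCreationTimestamp. A quick check against the multus-fmrzg figures from the preceding entry:

```python
from datetime import datetime, timezone

# Figures copied from the multus-fmrzg startup-latency entry above.
created = datetime(2025, 12, 9, 16, 56, 36, tzinfo=timezone.utc)
observed = datetime(2025, 12, 9, 16, 57, 46, 350313, tzinfo=timezone.utc)

# No pull happened, so the SLO duration is just observed minus created.
print((observed - created).total_seconds())  # 70.350313, matching the log
```
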
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.441270 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10144c44-c38a-4961-b733-3f37b9db9646-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.441340 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10144c44-c38a-4961-b733-3f37b9db9646-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.441382 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10144c44-c38a-4961-b733-3f37b9db9646-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.441438 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10144c44-c38a-4961-b733-3f37b9db9646-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.447079 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.447061118 podStartE2EDuration="1m15.447061118s" podCreationTimestamp="2025-12-09 16:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.432777968 +0000 UTC m=+93.367517160" watchObservedRunningTime="2025-12-09 16:57:46.447061118 +0000 UTC m=+93.381800300" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.484744 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-svpfq" podStartSLOduration=71.484723857 podStartE2EDuration="1m11.484723857s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.483926993 +0000 UTC m=+93.418666175" watchObservedRunningTime="2025-12-09 16:57:46.484723857 +0000 UTC m=+93.419463049" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.535557 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x2mnh" podStartSLOduration=70.535536831 podStartE2EDuration="1m10.535536831s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.524759494 +0000 UTC m=+93.459498696" 
watchObservedRunningTime="2025-12-09 16:57:46.535536831 +0000 UTC m=+93.470276023" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.542447 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10144c44-c38a-4961-b733-3f37b9db9646-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.542505 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10144c44-c38a-4961-b733-3f37b9db9646-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.542546 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10144c44-c38a-4961-b733-3f37b9db9646-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.542577 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10144c44-c38a-4961-b733-3f37b9db9646-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.542624 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10144c44-c38a-4961-b733-3f37b9db9646-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.542670 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/10144c44-c38a-4961-b733-3f37b9db9646-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.542767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/10144c44-c38a-4961-b733-3f37b9db9646-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.543636 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/10144c44-c38a-4961-b733-3f37b9db9646-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: 
I1209 16:57:46.546998 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.546976408 podStartE2EDuration="25.546976408s" podCreationTimestamp="2025-12-09 16:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.546098662 +0000 UTC m=+93.480837874" watchObservedRunningTime="2025-12-09 16:57:46.546976408 +0000 UTC m=+93.481715620" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.548568 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10144c44-c38a-4961-b733-3f37b9db9646-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.564625 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10144c44-c38a-4961-b733-3f37b9db9646-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x75px\" (UID: \"10144c44-c38a-4961-b733-3f37b9db9646\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.566074 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:46 crc kubenswrapper[4853]: E1209 16:57:46.566219 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.572456 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.572439107 podStartE2EDuration="1m8.572439107s" podCreationTimestamp="2025-12-09 16:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.571659534 +0000 UTC m=+93.506398716" watchObservedRunningTime="2025-12-09 16:57:46.572439107 +0000 UTC m=+93.507178299" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.601671 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.601652456 podStartE2EDuration="41.601652456s" podCreationTimestamp="2025-12-09 16:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.590305722 +0000 UTC m=+93.525044924" watchObservedRunningTime="2025-12-09 16:57:46.601652456 +0000 UTC m=+93.536391638" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.625562 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" Dec 09 16:57:46 crc kubenswrapper[4853]: I1209 16:57:46.653018 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=73.652999157 podStartE2EDuration="1m13.652999157s" podCreationTimestamp="2025-12-09 16:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:46.651077961 +0000 UTC m=+93.585817143" watchObservedRunningTime="2025-12-09 16:57:46.652999157 +0000 UTC m=+93.587738339" Dec 09 16:57:47 crc kubenswrapper[4853]: I1209 16:57:47.057461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" event={"ID":"10144c44-c38a-4961-b733-3f37b9db9646","Type":"ContainerStarted","Data":"9cfcad9a32bc8d1cb1d463dd83e9e3b3b6ae29e4c7d3ab3e6a535a0ca193e56b"} Dec 09 16:57:47 crc kubenswrapper[4853]: I1209 16:57:47.057506 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" event={"ID":"10144c44-c38a-4961-b733-3f37b9db9646","Type":"ContainerStarted","Data":"86c1afdb5f30297557bdeb7a315e4a2e1f475e82ba926dfaa4d1908bb3f036c6"} Dec 09 16:57:47 crc kubenswrapper[4853]: I1209 16:57:47.071630 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x75px" podStartSLOduration=71.071608462 podStartE2EDuration="1m11.071608462s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:57:47.071350865 +0000 UTC m=+94.006090047" watchObservedRunningTime="2025-12-09 16:57:47.071608462 +0000 UTC m=+94.006347644" Dec 09 16:57:47 crc kubenswrapper[4853]: I1209 16:57:47.567308 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:47 crc kubenswrapper[4853]: I1209 16:57:47.567424 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:47 crc kubenswrapper[4853]: E1209 16:57:47.567620 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:47 crc kubenswrapper[4853]: I1209 16:57:47.568027 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:47 crc kubenswrapper[4853]: E1209 16:57:47.568161 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:47 crc kubenswrapper[4853]: E1209 16:57:47.568810 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:47 crc kubenswrapper[4853]: I1209 16:57:47.569213 4853 scope.go:117] "RemoveContainer" containerID="c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b" Dec 09 16:57:47 crc kubenswrapper[4853]: E1209 16:57:47.569504 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:57:48 crc kubenswrapper[4853]: I1209 16:57:48.566370 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:48 crc kubenswrapper[4853]: E1209 16:57:48.566690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:49 crc kubenswrapper[4853]: I1209 16:57:49.566562 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:49 crc kubenswrapper[4853]: E1209 16:57:49.566726 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:49 crc kubenswrapper[4853]: I1209 16:57:49.566833 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:49 crc kubenswrapper[4853]: I1209 16:57:49.566861 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:49 crc kubenswrapper[4853]: E1209 16:57:49.566908 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:49 crc kubenswrapper[4853]: E1209 16:57:49.567132 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:50 crc kubenswrapper[4853]: I1209 16:57:50.566788 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:50 crc kubenswrapper[4853]: E1209 16:57:50.566952 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:51 crc kubenswrapper[4853]: I1209 16:57:51.566433 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:51 crc kubenswrapper[4853]: I1209 16:57:51.566474 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:51 crc kubenswrapper[4853]: I1209 16:57:51.566455 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:51 crc kubenswrapper[4853]: E1209 16:57:51.566618 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:51 crc kubenswrapper[4853]: E1209 16:57:51.567023 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:51 crc kubenswrapper[4853]: E1209 16:57:51.567245 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:52 crc kubenswrapper[4853]: I1209 16:57:52.566703 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:52 crc kubenswrapper[4853]: E1209 16:57:52.566816 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:53 crc kubenswrapper[4853]: I1209 16:57:53.567305 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:53 crc kubenswrapper[4853]: I1209 16:57:53.567382 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:53 crc kubenswrapper[4853]: E1209 16:57:53.569226 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:53 crc kubenswrapper[4853]: I1209 16:57:53.569309 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:53 crc kubenswrapper[4853]: E1209 16:57:53.569537 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:53 crc kubenswrapper[4853]: E1209 16:57:53.569574 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:54 crc kubenswrapper[4853]: I1209 16:57:54.330152 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:54 crc kubenswrapper[4853]: E1209 16:57:54.330366 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:57:54 crc kubenswrapper[4853]: E1209 16:57:54.330435 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs podName:7d55def8-578d-461b-9514-07eea9c62336 nodeName:}" failed. 
No retries permitted until 2025-12-09 16:58:58.330416246 +0000 UTC m=+165.265155428 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs") pod "network-metrics-daemon-77995" (UID: "7d55def8-578d-461b-9514-07eea9c62336") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 16:57:54 crc kubenswrapper[4853]: I1209 16:57:54.566276 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:54 crc kubenswrapper[4853]: E1209 16:57:54.566400 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:55 crc kubenswrapper[4853]: I1209 16:57:55.566125 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:55 crc kubenswrapper[4853]: I1209 16:57:55.566281 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:55 crc kubenswrapper[4853]: I1209 16:57:55.566140 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:55 crc kubenswrapper[4853]: E1209 16:57:55.566402 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:55 crc kubenswrapper[4853]: E1209 16:57:55.566465 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:55 crc kubenswrapper[4853]: E1209 16:57:55.566535 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:56 crc kubenswrapper[4853]: I1209 16:57:56.566871 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:56 crc kubenswrapper[4853]: E1209 16:57:56.567165 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:57 crc kubenswrapper[4853]: I1209 16:57:57.566152 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:57 crc kubenswrapper[4853]: I1209 16:57:57.566201 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:57 crc kubenswrapper[4853]: I1209 16:57:57.566201 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:57 crc kubenswrapper[4853]: E1209 16:57:57.566353 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:57 crc kubenswrapper[4853]: E1209 16:57:57.566419 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:57 crc kubenswrapper[4853]: E1209 16:57:57.566569 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:57:58 crc kubenswrapper[4853]: I1209 16:57:58.572035 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:57:58 crc kubenswrapper[4853]: E1209 16:57:58.572322 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:57:59 crc kubenswrapper[4853]: I1209 16:57:59.566583 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:57:59 crc kubenswrapper[4853]: I1209 16:57:59.566659 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:57:59 crc kubenswrapper[4853]: E1209 16:57:59.566770 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:57:59 crc kubenswrapper[4853]: I1209 16:57:59.566841 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:57:59 crc kubenswrapper[4853]: E1209 16:57:59.566992 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:57:59 crc kubenswrapper[4853]: E1209 16:57:59.567150 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:00 crc kubenswrapper[4853]: I1209 16:58:00.566681 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:00 crc kubenswrapper[4853]: E1209 16:58:00.566849 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:01 crc kubenswrapper[4853]: I1209 16:58:01.566871 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:01 crc kubenswrapper[4853]: E1209 16:58:01.567161 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:01 crc kubenswrapper[4853]: I1209 16:58:01.567201 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:01 crc kubenswrapper[4853]: I1209 16:58:01.567266 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:01 crc kubenswrapper[4853]: E1209 16:58:01.567368 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:01 crc kubenswrapper[4853]: E1209 16:58:01.568483 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:01 crc kubenswrapper[4853]: I1209 16:58:01.570440 4853 scope.go:117] "RemoveContainer" containerID="c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b" Dec 09 16:58:01 crc kubenswrapper[4853]: E1209 16:58:01.570934 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fzlgt_openshift-ovn-kubernetes(f18ca0bf-dc49-4000-97e9-9a64adac54de)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" Dec 09 16:58:02 crc kubenswrapper[4853]: I1209 16:58:02.567049 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:02 crc kubenswrapper[4853]: E1209 16:58:02.567234 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:03 crc kubenswrapper[4853]: I1209 16:58:03.566463 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:03 crc kubenswrapper[4853]: I1209 16:58:03.566770 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:03 crc kubenswrapper[4853]: I1209 16:58:03.566463 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:03 crc kubenswrapper[4853]: E1209 16:58:03.566844 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:03 crc kubenswrapper[4853]: E1209 16:58:03.568335 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:03 crc kubenswrapper[4853]: E1209 16:58:03.568425 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:04 crc kubenswrapper[4853]: I1209 16:58:04.566385 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:04 crc kubenswrapper[4853]: E1209 16:58:04.566549 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:05 crc kubenswrapper[4853]: I1209 16:58:05.566930 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:05 crc kubenswrapper[4853]: I1209 16:58:05.566997 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:05 crc kubenswrapper[4853]: E1209 16:58:05.567047 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:05 crc kubenswrapper[4853]: E1209 16:58:05.567115 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:05 crc kubenswrapper[4853]: I1209 16:58:05.566941 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:05 crc kubenswrapper[4853]: E1209 16:58:05.567215 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:06 crc kubenswrapper[4853]: I1209 16:58:06.566542 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:06 crc kubenswrapper[4853]: E1209 16:58:06.566769 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:07 crc kubenswrapper[4853]: I1209 16:58:07.566442 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:07 crc kubenswrapper[4853]: I1209 16:58:07.567128 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:07 crc kubenswrapper[4853]: I1209 16:58:07.566637 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:07 crc kubenswrapper[4853]: E1209 16:58:07.567328 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:07 crc kubenswrapper[4853]: E1209 16:58:07.567707 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:07 crc kubenswrapper[4853]: E1209 16:58:07.567765 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:08 crc kubenswrapper[4853]: I1209 16:58:08.566665 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:08 crc kubenswrapper[4853]: E1209 16:58:08.567028 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:09 crc kubenswrapper[4853]: I1209 16:58:09.566815 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:09 crc kubenswrapper[4853]: I1209 16:58:09.566848 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:09 crc kubenswrapper[4853]: E1209 16:58:09.567050 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:09 crc kubenswrapper[4853]: I1209 16:58:09.566848 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:09 crc kubenswrapper[4853]: E1209 16:58:09.567136 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:09 crc kubenswrapper[4853]: E1209 16:58:09.567281 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:10 crc kubenswrapper[4853]: I1209 16:58:10.130425 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/1.log" Dec 09 16:58:10 crc kubenswrapper[4853]: I1209 16:58:10.131017 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/0.log" Dec 09 16:58:10 crc kubenswrapper[4853]: I1209 16:58:10.131065 4853 generic.go:334] "Generic (PLEG): container finished" podID="8b02f072-d8cc-4c46-8159-fe99d19b24a6" containerID="3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384" exitCode=1 Dec 09 16:58:10 crc kubenswrapper[4853]: I1209 16:58:10.131116 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerDied","Data":"3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384"} Dec 09 16:58:10 crc kubenswrapper[4853]: I1209 16:58:10.131172 4853 scope.go:117] "RemoveContainer" containerID="9e7c68b688d25ece0cb0ed361f69666e0bb92f4270752adce37f857e50e2d6fc" Dec 09 16:58:10 crc kubenswrapper[4853]: I1209 16:58:10.131718 4853 scope.go:117] "RemoveContainer" containerID="3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384" Dec 09 16:58:10 crc kubenswrapper[4853]: E1209 16:58:10.131993 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fmrzg_openshift-multus(8b02f072-d8cc-4c46-8159-fe99d19b24a6)\"" pod="openshift-multus/multus-fmrzg" podUID="8b02f072-d8cc-4c46-8159-fe99d19b24a6" Dec 09 16:58:10 crc kubenswrapper[4853]: I1209 16:58:10.567143 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:10 crc kubenswrapper[4853]: E1209 16:58:10.567308 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:11 crc kubenswrapper[4853]: I1209 16:58:11.136871 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/1.log" Dec 09 16:58:11 crc kubenswrapper[4853]: I1209 16:58:11.567187 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:11 crc kubenswrapper[4853]: I1209 16:58:11.567298 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:11 crc kubenswrapper[4853]: I1209 16:58:11.567187 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:11 crc kubenswrapper[4853]: E1209 16:58:11.567437 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:11 crc kubenswrapper[4853]: E1209 16:58:11.567564 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:11 crc kubenswrapper[4853]: E1209 16:58:11.567733 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:12 crc kubenswrapper[4853]: I1209 16:58:12.566254 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:12 crc kubenswrapper[4853]: E1209 16:58:12.566510 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:13 crc kubenswrapper[4853]: E1209 16:58:13.547436 4853 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 09 16:58:13 crc kubenswrapper[4853]: I1209 16:58:13.567083 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:13 crc kubenswrapper[4853]: I1209 16:58:13.567213 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:13 crc kubenswrapper[4853]: I1209 16:58:13.567231 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:13 crc kubenswrapper[4853]: E1209 16:58:13.569452 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:13 crc kubenswrapper[4853]: E1209 16:58:13.569879 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:13 crc kubenswrapper[4853]: E1209 16:58:13.570492 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:13 crc kubenswrapper[4853]: I1209 16:58:13.571115 4853 scope.go:117] "RemoveContainer" containerID="c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b" Dec 09 16:58:13 crc kubenswrapper[4853]: E1209 16:58:13.657748 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 16:58:14 crc kubenswrapper[4853]: I1209 16:58:14.165246 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/3.log" Dec 09 16:58:14 crc kubenswrapper[4853]: I1209 16:58:14.167519 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerStarted","Data":"507980d98ddb2b0da1d57c39f0786848bad044537478316a247f8a4f48fdcdc5"} Dec 09 16:58:14 crc kubenswrapper[4853]: I1209 16:58:14.168332 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 16:58:14 crc kubenswrapper[4853]: I1209 16:58:14.194412 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podStartSLOduration=98.194386981 podStartE2EDuration="1m38.194386981s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:14.192066253 +0000 UTC m=+121.126805435" watchObservedRunningTime="2025-12-09 16:58:14.194386981 +0000 UTC m=+121.129126163" Dec 09 16:58:14 crc kubenswrapper[4853]: I1209 16:58:14.566487 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:14 crc kubenswrapper[4853]: E1209 16:58:14.566730 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:15 crc kubenswrapper[4853]: I1209 16:58:15.001738 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-77995"] Dec 09 16:58:15 crc kubenswrapper[4853]: I1209 16:58:15.171971 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:15 crc kubenswrapper[4853]: E1209 16:58:15.172379 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:15 crc kubenswrapper[4853]: I1209 16:58:15.568024 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:15 crc kubenswrapper[4853]: E1209 16:58:15.568219 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:15 crc kubenswrapper[4853]: I1209 16:58:15.568484 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:15 crc kubenswrapper[4853]: E1209 16:58:15.568579 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:15 crc kubenswrapper[4853]: I1209 16:58:15.569882 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:15 crc kubenswrapper[4853]: E1209 16:58:15.570026 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:16 crc kubenswrapper[4853]: I1209 16:58:16.566524 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:16 crc kubenswrapper[4853]: E1209 16:58:16.566719 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:17 crc kubenswrapper[4853]: I1209 16:58:17.566570 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:17 crc kubenswrapper[4853]: I1209 16:58:17.566570 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:17 crc kubenswrapper[4853]: E1209 16:58:17.566800 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:17 crc kubenswrapper[4853]: I1209 16:58:17.566530 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:17 crc kubenswrapper[4853]: E1209 16:58:17.567018 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:17 crc kubenswrapper[4853]: E1209 16:58:17.567155 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:18 crc kubenswrapper[4853]: I1209 16:58:18.566800 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:18 crc kubenswrapper[4853]: E1209 16:58:18.567066 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:18 crc kubenswrapper[4853]: E1209 16:58:18.659532 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 16:58:19 crc kubenswrapper[4853]: I1209 16:58:19.566385 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:19 crc kubenswrapper[4853]: I1209 16:58:19.566396 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:19 crc kubenswrapper[4853]: E1209 16:58:19.566561 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:19 crc kubenswrapper[4853]: I1209 16:58:19.566860 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:19 crc kubenswrapper[4853]: E1209 16:58:19.567015 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:19 crc kubenswrapper[4853]: E1209 16:58:19.566879 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:20 crc kubenswrapper[4853]: I1209 16:58:20.567044 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:20 crc kubenswrapper[4853]: E1209 16:58:20.567810 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:21 crc kubenswrapper[4853]: I1209 16:58:21.566430 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:21 crc kubenswrapper[4853]: I1209 16:58:21.566509 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:21 crc kubenswrapper[4853]: E1209 16:58:21.566636 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:21 crc kubenswrapper[4853]: I1209 16:58:21.566653 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:21 crc kubenswrapper[4853]: E1209 16:58:21.566914 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:21 crc kubenswrapper[4853]: E1209 16:58:21.566966 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:22 crc kubenswrapper[4853]: I1209 16:58:22.566859 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:22 crc kubenswrapper[4853]: I1209 16:58:22.567530 4853 scope.go:117] "RemoveContainer" containerID="3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384" Dec 09 16:58:22 crc kubenswrapper[4853]: E1209 16:58:22.567535 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:23 crc kubenswrapper[4853]: I1209 16:58:23.213096 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/1.log" Dec 09 16:58:23 crc kubenswrapper[4853]: I1209 16:58:23.213144 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerStarted","Data":"7ada34554e8bab61755e7d0175d3ce2d43142a6cca373bc8134e19cf7596691c"} Dec 09 16:58:23 crc kubenswrapper[4853]: I1209 16:58:23.567018 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:23 crc kubenswrapper[4853]: I1209 16:58:23.567104 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:23 crc kubenswrapper[4853]: E1209 16:58:23.569234 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:23 crc kubenswrapper[4853]: I1209 16:58:23.569310 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:23 crc kubenswrapper[4853]: E1209 16:58:23.569459 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:23 crc kubenswrapper[4853]: E1209 16:58:23.569641 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:23 crc kubenswrapper[4853]: E1209 16:58:23.660079 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 16:58:24 crc kubenswrapper[4853]: I1209 16:58:24.567034 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:24 crc kubenswrapper[4853]: E1209 16:58:24.567266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:25 crc kubenswrapper[4853]: I1209 16:58:25.567188 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:25 crc kubenswrapper[4853]: I1209 16:58:25.567198 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:25 crc kubenswrapper[4853]: I1209 16:58:25.567303 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:25 crc kubenswrapper[4853]: E1209 16:58:25.567490 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:25 crc kubenswrapper[4853]: E1209 16:58:25.567741 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:25 crc kubenswrapper[4853]: E1209 16:58:25.567917 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:26 crc kubenswrapper[4853]: I1209 16:58:26.567174 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:26 crc kubenswrapper[4853]: E1209 16:58:26.567381 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:27 crc kubenswrapper[4853]: I1209 16:58:27.566876 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:27 crc kubenswrapper[4853]: I1209 16:58:27.566910 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:27 crc kubenswrapper[4853]: E1209 16:58:27.567320 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 16:58:27 crc kubenswrapper[4853]: I1209 16:58:27.566963 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:27 crc kubenswrapper[4853]: E1209 16:58:27.567527 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 16:58:27 crc kubenswrapper[4853]: E1209 16:58:27.567575 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 16:58:28 crc kubenswrapper[4853]: I1209 16:58:28.566774 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:28 crc kubenswrapper[4853]: E1209 16:58:28.566978 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-77995" podUID="7d55def8-578d-461b-9514-07eea9c62336" Dec 09 16:58:29 crc kubenswrapper[4853]: I1209 16:58:29.566893 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:29 crc kubenswrapper[4853]: I1209 16:58:29.567208 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:29 crc kubenswrapper[4853]: I1209 16:58:29.566911 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:29 crc kubenswrapper[4853]: I1209 16:58:29.569902 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 16:58:29 crc kubenswrapper[4853]: I1209 16:58:29.570881 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 16:58:29 crc kubenswrapper[4853]: I1209 16:58:29.571117 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 16:58:29 crc kubenswrapper[4853]: I1209 16:58:29.571209 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 16:58:30 crc kubenswrapper[4853]: I1209 16:58:30.566099 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995" Dec 09 16:58:30 crc kubenswrapper[4853]: I1209 16:58:30.569384 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 16:58:30 crc kubenswrapper[4853]: I1209 16:58:30.570010 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 16:58:36 crc kubenswrapper[4853]: I1209 16:58:36.981538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.035284 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.035797 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.040233 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ktltc"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.043141 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.048006 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.048500 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.048134 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.048845 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.049005 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.049288 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.052589 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.053290 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.053391 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065084 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065144 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065163 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065224 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065103 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065333 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065402 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065462 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.065721 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.066232 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" 
Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.066404 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.066439 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.066451 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.066572 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.067046 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.067443 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.067841 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.068658 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.070646 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jk2pg"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.071202 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvjb6"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.071536 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.071698 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.072448 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5vb7d"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.072943 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ccm5l"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073257 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073434 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073559 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xp79b"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073659 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073821 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ccm5l" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073963 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074296 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074475 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074618 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073997 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073834 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.075076 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074032 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074063 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074094 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074122 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074151 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.073701 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.074968 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 
16:58:37.075811 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.076421 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.076673 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.076736 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.077173 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.078091 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083118 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083161 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083419 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083477 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083421 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083663 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083716 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083785 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083818 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.083927 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.084047 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.084529 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.084832 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 
16:58:37.089168 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.089659 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.089858 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.090086 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.090130 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.090286 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.090503 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.090726 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.090967 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.091271 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.091425 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.091569 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.091661 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.091744 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.091881 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.092013 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.092115 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9blst"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.092407 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.095664 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.096028 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.096519 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.096633 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.096645 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.102308 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.103238 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.103513 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.104001 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.104349 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.104430 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.105478 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.105540 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.105590 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ktltc"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.106112 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jk2pg"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.106326 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.106996 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.112999 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.113049 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ccm5l"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.113065 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvjb6"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.114672 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-serving-cert\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.116056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2b1f86c-b414-418d-973d-6db3442a1bd1-audit-dir\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.116305 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n5jk\" (UniqueName: \"kubernetes.io/projected/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-kube-api-access-8n5jk\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.116567 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-image-import-ca\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.116645 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.116831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-encryption-config\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.117079 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.117856 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.118563 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.118694 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.125139 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.125543 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.129466 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.129855 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130109 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d1b37-5404-44ed-83de-51eb04c1b2c4-config\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130148 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-audit\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130175 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-config\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130192 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a93d1b37-5404-44ed-83de-51eb04c1b2c4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130219 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130233 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-client-ca\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130248 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhq2\" (UniqueName: \"kubernetes.io/projected/b2b1f86c-b414-418d-973d-6db3442a1bd1-kube-api-access-vrhq2\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130271 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2b1f86c-b414-418d-973d-6db3442a1bd1-node-pullsecrets\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130292 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130302 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a93d1b37-5404-44ed-83de-51eb04c1b2c4-images\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130325 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-etcd-client\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130343 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-serving-cert\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130363 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-config\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130388 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56kt\" (UniqueName: \"kubernetes.io/projected/a93d1b37-5404-44ed-83de-51eb04c1b2c4-kube-api-access-s56kt\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130403 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-etcd-serving-ca\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.130481 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.132175 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.132717 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.132797 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.132972 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.133384 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.133482 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.133562 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.139010 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.139165 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xp79b"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.139791 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.140455 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.141419 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9blst"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.143267 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.143339 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.144711 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"] Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.144762 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5vb7d"] Dec 09 
16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.146266 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 09 16:58:37 crc kubenswrapper[4853]: I1209 16:58:37.146301 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.120489 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123268 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-serving-cert\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123331 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf38b39-12f9-48ea-81dc-1e39a057074a-serving-cert\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123376 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzrb7\" (UniqueName: \"kubernetes.io/projected/5ac21b02-cdf3-4f92-8f7b-898015277e7a-kube-api-access-zzrb7\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123449 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk9ht\" (UniqueName: \"kubernetes.io/projected/3bf38b39-12f9-48ea-81dc-1e39a057074a-kube-api-access-tk9ht\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123483 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-service-ca\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123512 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt29v\" (UniqueName: \"kubernetes.io/projected/098686c6-8100-46dd-ae99-4576faf0d50e-kube-api-access-mt29v\") pod \"cluster-samples-operator-665b6dd947-nqxhf\" (UID: \"098686c6-8100-46dd-ae99-4576faf0d50e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123667 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-serving-cert\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123699 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-trusted-ca-bundle\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123729 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d424076c-9966-48fc-94c8-9932dccc8658-auth-proxy-config\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123759 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-config\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-config\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.123835 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37f050ab-b03b-4082-b819-e1bef9642e87-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.124128 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s56kt\" (UniqueName: \"kubernetes.io/projected/a93d1b37-5404-44ed-83de-51eb04c1b2c4-kube-api-access-s56kt\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.124178 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-etcd-serving-ca\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.124214 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.124421 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.124493 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-oauth-config\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.125215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-service-ca-bundle\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.125420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-config\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.125937 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqjtz\" (UniqueName: \"kubernetes.io/projected/3d2c3869-682e-4c32-b325-afd38cd76667-kube-api-access-bqjtz\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126207 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-serving-cert\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126258 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-oauth-serving-cert\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126331 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2b1f86c-b414-418d-973d-6db3442a1bd1-audit-dir\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126377 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4pw\" (UniqueName: \"kubernetes.io/projected/5b33f6ba-88ff-4fd0-876d-871cf36db1cf-kube-api-access-st4pw\") pod \"downloads-7954f5f757-ccm5l\" (UID: \"5b33f6ba-88ff-4fd0-876d-871cf36db1cf\") " pod="openshift-console/downloads-7954f5f757-ccm5l"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126415 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1112df0-82ff-460a-b9c3-e3bc662862d1-config\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126455 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-audit-policies\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126511 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-etcd-serving-ca\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126531 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3bf38b39-12f9-48ea-81dc-1e39a057074a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126589 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t945w\" (UniqueName: \"kubernetes.io/projected/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-kube-api-access-t945w\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126717 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n5jk\" (UniqueName: \"kubernetes.io/projected/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-kube-api-access-8n5jk\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126749 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1112df0-82ff-460a-b9c3-e3bc662862d1-trusted-ca\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126774 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126802 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-image-import-ca\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-encryption-config\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-client-ca\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.126981 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127015 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-serving-cert\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127043 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlw4j\" (UniqueName: \"kubernetes.io/projected/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-kube-api-access-dlw4j\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127089 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d1b37-5404-44ed-83de-51eb04c1b2c4-config\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127118 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-config\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127143 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-console-config\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127170 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4q6k\" (UniqueName: \"kubernetes.io/projected/71b908be-495e-4eb2-8429-56c89e4344f4-kube-api-access-c4q6k\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-audit\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127342 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b62fx\" (UniqueName: \"kubernetes.io/projected/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-kube-api-access-b62fx\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127774 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127814 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-config\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127878 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.127906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128103 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2c3869-682e-4c32-b325-afd38cd76667-serving-cert\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128132 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37f050ab-b03b-4082-b819-e1bef9642e87-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128182 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1112df0-82ff-460a-b9c3-e3bc662862d1-serving-cert\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128214 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a93d1b37-5404-44ed-83de-51eb04c1b2c4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128297 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-client-ca\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs2jj\" (UniqueName: \"kubernetes.io/projected/37f050ab-b03b-4082-b819-e1bef9642e87-kube-api-access-cs2jj\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac21b02-cdf3-4f92-8f7b-898015277e7a-serving-cert\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/37f050ab-b03b-4082-b819-e1bef9642e87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhq2\" (UniqueName: \"kubernetes.io/projected/b2b1f86c-b414-418d-973d-6db3442a1bd1-kube-api-access-vrhq2\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128518 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128546 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128574 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69pz\" (UniqueName: \"kubernetes.io/projected/d424076c-9966-48fc-94c8-9932dccc8658-kube-api-access-d69pz\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128619 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn6ml\" (UniqueName: \"kubernetes.io/projected/e1112df0-82ff-460a-b9c3-e3bc662862d1-kube-api-access-hn6ml\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128772 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-etcd-client\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128798 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2b1f86c-b414-418d-973d-6db3442a1bd1-node-pullsecrets\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128855 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/098686c6-8100-46dd-ae99-4576faf0d50e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nqxhf\" (UID: \"098686c6-8100-46dd-ae99-4576faf0d50e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128884 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128934 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-policies\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.128960 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-encryption-config\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.129279 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-audit-dir\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.129311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a93d1b37-5404-44ed-83de-51eb04c1b2c4-images\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.129352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.129381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d424076c-9966-48fc-94c8-9932dccc8658-config\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.129413 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-etcd-client\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.129435 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-dir\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.129463 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d424076c-9966-48fc-94c8-9932dccc8658-machine-approver-tls\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.130306 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d1b37-5404-44ed-83de-51eb04c1b2c4-config\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.130354 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2b1f86c-b414-418d-973d-6db3442a1bd1-audit-dir\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.130880 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-audit\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.131943 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-config\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.136447 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-image-import-ca\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.145688 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-client-ca\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.146866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2b1f86c-b414-418d-973d-6db3442a1bd1-node-pullsecrets\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.148412 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a93d1b37-5404-44ed-83de-51eb04c1b2c4-images\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.151393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b1f86c-b414-418d-973d-6db3442a1bd1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.152313 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-etcd-client\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.165986 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57bs5"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.167882 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5grrn"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.169794 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5grrn"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.169849 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.171777 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.172232 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.172339 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.173102 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.173280 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.173360 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.173689 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.173907 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.179684 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-encryption-config\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.187388 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2b1f86c-b414-418d-973d-6db3442a1bd1-serving-cert\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.190173 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a93d1b37-5404-44ed-83de-51eb04c1b2c4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.190959 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.195828 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.196406 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.196917 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.196408 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.198208 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.198875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56kt\" (UniqueName: \"kubernetes.io/projected/a93d1b37-5404-44ed-83de-51eb04c1b2c4-kube-api-access-s56kt\") pod \"machine-api-operator-5694c8668f-zgd7r\" (UID: \"a93d1b37-5404-44ed-83de-51eb04c1b2c4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.199128 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.199845 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhq2\" (UniqueName: \"kubernetes.io/projected/b2b1f86c-b414-418d-973d-6db3442a1bd1-kube-api-access-vrhq2\") pod \"apiserver-76f77b778f-ktltc\" (UID: \"b2b1f86c-b414-418d-973d-6db3442a1bd1\") " pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.201110 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.201676 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.202759 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.203132 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-serving-cert\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.203766 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.203913 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.204116 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ffzns"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.204722 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n5jk\" (UniqueName: \"kubernetes.io/projected/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-kube-api-access-8n5jk\") pod \"route-controller-manager-6576b87f9c-7sm7j\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.204838 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ffzns"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.205823 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6cmx8"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.208494 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.210405 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.210950 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.211318 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.211534 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.212097 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.212481 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cw4kq"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.213164 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218174 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218202 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218480 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218585 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218656 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218742 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218922 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.218955 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219086 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219107 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219169 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219235 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219338 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219421 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219439 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219530 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219623 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219655 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219816 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.219978 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.220150 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.220262 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.220431 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.222747 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.222920 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.223081 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.223289 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.223432 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.223558 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.223785 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.223946 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.224138 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4bfgv"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.225058 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.230854 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.231125 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.231286 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.231447 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.249316 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.250243 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.250692 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d"]
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.251313 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.260118 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.262935 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-policies\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.262983 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-encryption-config\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-audit-dir\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263036 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-service-ca\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263064 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d1afc3-f774-4b6c-8dc5-ca20727f4203-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263092 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-826zl\" (UniqueName: \"kubernetes.io/projected/7585c230-8db6-45bd-bd39-17d27ff826dd-kube-api-access-826zl\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263118 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263145 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d424076c-9966-48fc-94c8-9932dccc8658-config\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-dir\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d424076c-9966-48fc-94c8-9932dccc8658-machine-approver-tls\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263213 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca85555-03f6-4585-bef7-a30cfdce8a59-serving-cert\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263237 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8deaa879-6b9f-4b7b-8cef-f48461d13c5f-metrics-tls\") pod \"dns-operator-744455d44c-4bfgv\" (UID: \"8deaa879-6b9f-4b7b-8cef-f48461d13c5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf38b39-12f9-48ea-81dc-1e39a057074a-serving-cert\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263283 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzrb7\" (UniqueName: \"kubernetes.io/projected/5ac21b02-cdf3-4f92-8f7b-898015277e7a-kube-api-access-zzrb7\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263314 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263337 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk9ht\" (UniqueName: \"kubernetes.io/projected/3bf38b39-12f9-48ea-81dc-1e39a057074a-kube-api-access-tk9ht\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263357 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-service-ca\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263379 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt29v\" (UniqueName: \"kubernetes.io/projected/098686c6-8100-46dd-ae99-4576faf0d50e-kube-api-access-mt29v\") pod \"cluster-samples-operator-665b6dd947-nqxhf\" (UID: \"098686c6-8100-46dd-ae99-4576faf0d50e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263409 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdssg\" (UniqueName: \"kubernetes.io/projected/3190e757-5290-4ff7-93ba-56703960ed28-kube-api-access-qdssg\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-serving-cert\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263456 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-trusted-ca-bundle\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d424076c-9966-48fc-94c8-9932dccc8658-auth-proxy-config\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263519 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-config\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263541 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37f050ab-b03b-4082-b819-e1bef9642e87-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263565 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kktlh\" (UniqueName: \"kubernetes.io/projected/8deaa879-6b9f-4b7b-8cef-f48461d13c5f-kube-api-access-kktlh\") pod \"dns-operator-744455d44c-4bfgv\" (UID: \"8deaa879-6b9f-4b7b-8cef-f48461d13c5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263588 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7585c230-8db6-45bd-bd39-17d27ff826dd-config-volume\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263638 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8892469-e13f-4dcf-ab96-106be91ab901-service-ca-bundle\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263700 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263738 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263786 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42d1afc3-f774-4b6c-8dc5-ca20727f4203-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263806 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-policies\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263818 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gc9\" (UniqueName: \"kubernetes.io/projected/d8892469-e13f-4dcf-ab96-106be91ab901-kube-api-access-j6gc9\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263842 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-oauth-config\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.263865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-service-ca-bundle\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264128 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqjtz\" (UniqueName: \"kubernetes.io/projected/3d2c3869-682e-4c32-b325-afd38cd76667-kube-api-access-bqjtz\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264153 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-metrics-certs\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264178 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-oauth-serving-cert\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b"
Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264198 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264221 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d1afc3-f774-4b6c-8dc5-ca20727f4203-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st4pw\" (UniqueName: \"kubernetes.io/projected/5b33f6ba-88ff-4fd0-876d-871cf36db1cf-kube-api-access-st4pw\") pod \"downloads-7954f5f757-ccm5l\" (UID: \"5b33f6ba-88ff-4fd0-876d-871cf36db1cf\") " pod="openshift-console/downloads-7954f5f757-ccm5l" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264288 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1112df0-82ff-460a-b9c3-e3bc662862d1-config\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264310 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-audit-policies\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3bf38b39-12f9-48ea-81dc-1e39a057074a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264403 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t945w\" (UniqueName: \"kubernetes.io/projected/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-kube-api-access-t945w\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264430 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3190e757-5290-4ff7-93ba-56703960ed28-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264458 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8rzp\" (UniqueName: \"kubernetes.io/projected/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-kube-api-access-q8rzp\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264503 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1112df0-82ff-460a-b9c3-e3bc662862d1-trusted-ca\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smlnw\" (UniqueName: \"kubernetes.io/projected/6ca85555-03f6-4585-bef7-a30cfdce8a59-kube-api-access-smlnw\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264577 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3190e757-5290-4ff7-93ba-56703960ed28-metrics-tls\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264623 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-client-ca\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264649 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-stats-auth\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264688 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264713 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-serving-cert\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264739 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlw4j\" (UniqueName: \"kubernetes.io/projected/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-kube-api-access-dlw4j\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264760 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-default-certificate\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264803 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-config\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264825 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a91e971-b137-461e-b373-9b44ded89e8e-proxy-tls\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-console-config\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264872 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4q6k\" (UniqueName: \"kubernetes.io/projected/71b908be-495e-4eb2-8429-56c89e4344f4-kube-api-access-c4q6k\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264899 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b62fx\" (UniqueName: \"kubernetes.io/projected/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-kube-api-access-b62fx\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264931 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a91e971-b137-461e-b373-9b44ded89e8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.264979 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265005 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-config\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265032 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265080 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2c3869-682e-4c32-b325-afd38cd76667-serving-cert\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265099 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37f050ab-b03b-4082-b819-e1bef9642e87-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265121 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/3190e757-5290-4ff7-93ba-56703960ed28-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265143 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265163 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1112df0-82ff-460a-b9c3-e3bc662862d1-serving-cert\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265182 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-client\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265205 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265249 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs2jj\" (UniqueName: \"kubernetes.io/projected/37f050ab-b03b-4082-b819-e1bef9642e87-kube-api-access-cs2jj\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265273 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac21b02-cdf3-4f92-8f7b-898015277e7a-serving-cert\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265296 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/37f050ab-b03b-4082-b819-e1bef9642e87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265318 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-ca\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265342 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265361 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265382 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d69pz\" (UniqueName: \"kubernetes.io/projected/d424076c-9966-48fc-94c8-9932dccc8658-kube-api-access-d69pz\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265406 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7585c230-8db6-45bd-bd39-17d27ff826dd-metrics-tls\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265428 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn6ml\" (UniqueName: \"kubernetes.io/projected/e1112df0-82ff-460a-b9c3-e3bc662862d1-kube-api-access-hn6ml\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265449 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-etcd-client\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265473 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265497 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265524 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/098686c6-8100-46dd-ae99-4576faf0d50e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nqxhf\" (UID: \"098686c6-8100-46dd-ae99-4576faf0d50e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265550 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.265573 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l457g\" (UniqueName: \"kubernetes.io/projected/9a91e971-b137-461e-b373-9b44ded89e8e-kube-api-access-l457g\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.267294 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.272240 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.272263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-encryption-config\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.272363 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-audit-dir\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.273405 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-service-ca-bundle\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.273679 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.273911 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.274049 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1112df0-82ff-460a-b9c3-e3bc662862d1-trusted-ca\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.274753 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2c3869-682e-4c32-b325-afd38cd76667-config\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.275910 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-trusted-ca-bundle\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.278123 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-service-ca\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.279047 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.280122 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qm8cq"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.280367 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.280743 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.280937 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.281336 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.281667 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.281849 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.282128 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gv85n"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.286527 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.287301 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.287627 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.294143 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.319315 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w78wr"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.323453 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.324065 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.324161 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/37f050ab-b03b-4082-b819-e1bef9642e87-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.324335 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37f050ab-b03b-4082-b819-e1bef9642e87-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.324532 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.324887 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-serving-cert\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.325753 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.326292 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.326614 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.327402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-oauth-config\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.327933 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.328325 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.328699 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.329725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.330818 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d424076c-9966-48fc-94c8-9932dccc8658-auth-proxy-config\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.331279 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.332970 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf38b39-12f9-48ea-81dc-1e39a057074a-serving-cert\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.333055 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac21b02-cdf3-4f92-8f7b-898015277e7a-serving-cert\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.334066 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-oauth-serving-cert\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.335097 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-client-ca\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.335542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.336868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.337695 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3bf38b39-12f9-48ea-81dc-1e39a057074a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.338349 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.339201 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.342071 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.342779 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d424076c-9966-48fc-94c8-9932dccc8658-config\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.342840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-dir\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.343086 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.343185 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.343486 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.343852 4853 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.344077 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.344681 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.345760 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-audit-policies\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.346985 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-etcd-client\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.347502 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1112df0-82ff-460a-b9c3-e3bc662862d1-serving-cert\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.348152 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/098686c6-8100-46dd-ae99-4576faf0d50e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nqxhf\" (UID: \"098686c6-8100-46dd-ae99-4576faf0d50e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.349997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2c3869-682e-4c32-b325-afd38cd76667-serving-cert\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.352227 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-serving-cert\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.353240 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-console-config\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 
16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.353888 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.354463 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.354821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d424076c-9966-48fc-94c8-9932dccc8658-machine-approver-tls\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.355403 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.356498 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g2xml"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.357447 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.358095 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.358676 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-config\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.359882 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1112df0-82ff-460a-b9c3-e3bc662862d1-config\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.360105 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.363471 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hvkhf"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.364440 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.365257 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xl5jc"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.366508 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.368680 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-ca\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.368735 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7585c230-8db6-45bd-bd39-17d27ff826dd-metrics-tls\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.368783 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.368827 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l457g\" (UniqueName: \"kubernetes.io/projected/9a91e971-b137-461e-b373-9b44ded89e8e-kube-api-access-l457g\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.368859 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-service-ca\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.368986 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d1afc3-f774-4b6c-8dc5-ca20727f4203-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-826zl\" (UniqueName: \"kubernetes.io/projected/7585c230-8db6-45bd-bd39-17d27ff826dd-kube-api-access-826zl\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369084 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca85555-03f6-4585-bef7-a30cfdce8a59-serving-cert\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/8deaa879-6b9f-4b7b-8cef-f48461d13c5f-metrics-tls\") pod \"dns-operator-744455d44c-4bfgv\" (UID: \"8deaa879-6b9f-4b7b-8cef-f48461d13c5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369241 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdssg\" (UniqueName: \"kubernetes.io/projected/3190e757-5290-4ff7-93ba-56703960ed28-kube-api-access-qdssg\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369355 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kktlh\" (UniqueName: \"kubernetes.io/projected/8deaa879-6b9f-4b7b-8cef-f48461d13c5f-kube-api-access-kktlh\") pod \"dns-operator-744455d44c-4bfgv\" (UID: \"8deaa879-6b9f-4b7b-8cef-f48461d13c5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369549 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7585c230-8db6-45bd-bd39-17d27ff826dd-config-volume\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8892469-e13f-4dcf-ab96-106be91ab901-service-ca-bundle\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42d1afc3-f774-4b6c-8dc5-ca20727f4203-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gc9\" (UniqueName: \"kubernetes.io/projected/d8892469-e13f-4dcf-ab96-106be91ab901-kube-api-access-j6gc9\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369850 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-metrics-certs\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369894 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d1afc3-f774-4b6c-8dc5-ca20727f4203-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369946 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3190e757-5290-4ff7-93ba-56703960ed28-trusted-ca\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.369970 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8rzp\" (UniqueName: \"kubernetes.io/projected/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-kube-api-access-q8rzp\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smlnw\" (UniqueName: \"kubernetes.io/projected/6ca85555-03f6-4585-bef7-a30cfdce8a59-kube-api-access-smlnw\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370036 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3190e757-5290-4ff7-93ba-56703960ed28-metrics-tls\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370064 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-stats-auth\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-default-certificate\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370182 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-ca\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370250 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-service-ca\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370361 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/7585c230-8db6-45bd-bd39-17d27ff826dd-config-volume\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370161 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a91e971-b137-461e-b373-9b44ded89e8e-proxy-tls\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a91e971-b137-461e-b373-9b44ded89e8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.370962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-config\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.371010 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3190e757-5290-4ff7-93ba-56703960ed28-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.371036 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-client\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.371053 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d1afc3-f774-4b6c-8dc5-ca20727f4203-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.371065 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.372423 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8892469-e13f-4dcf-ab96-106be91ab901-service-ca-bundle\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " 
pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.372494 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7585c230-8db6-45bd-bd39-17d27ff826dd-metrics-tls\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.372846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3190e757-5290-4ff7-93ba-56703960ed28-trusted-ca\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.373166 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca85555-03f6-4585-bef7-a30cfdce8a59-serving-cert\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.373210 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.374109 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca85555-03f6-4585-bef7-a30cfdce8a59-config\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.374641 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a91e971-b137-461e-b373-9b44ded89e8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.374886 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.376234 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.377016 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-default-certificate\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.377700 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ca85555-03f6-4585-bef7-a30cfdce8a59-etcd-client\") pod 
\"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.377767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.377838 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-stats-auth\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.378184 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3190e757-5290-4ff7-93ba-56703960ed28-metrics-tls\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.378485 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8892469-e13f-4dcf-ab96-106be91ab901-metrics-certs\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.379075 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57bs5"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.379434 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d1afc3-f774-4b6c-8dc5-ca20727f4203-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.381437 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5grrn"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.381522 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a91e971-b137-461e-b373-9b44ded89e8e-proxy-tls\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.384967 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.385533 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.388105 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.389444 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6cmx8"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.392007 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.393654 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.395422 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cw4kq"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.397908 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.398641 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.398861 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4bfgv"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.400948 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.402751 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.404679 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qm8cq"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.407646 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.408807 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w78wr"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.410297 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.412159 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g2xml"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.414697 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gv85n"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.415864 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xl5jc"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.417105 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.417974 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 
16:58:38.419233 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.420666 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.437780 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.442368 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8deaa879-6b9f-4b7b-8cef-f48461d13c5f-metrics-tls\") pod \"dns-operator-744455d44c-4bfgv\" (UID: \"8deaa879-6b9f-4b7b-8cef-f48461d13c5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.464481 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.498156 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.518318 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.538664 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.557648 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.578857 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.612700 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzrb7\" (UniqueName: \"kubernetes.io/projected/5ac21b02-cdf3-4f92-8f7b-898015277e7a-kube-api-access-zzrb7\") pod \"controller-manager-879f6c89f-qvjb6\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.634252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b62fx\" (UniqueName: \"kubernetes.io/projected/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-kube-api-access-b62fx\") pod \"oauth-openshift-558db77b4-5vb7d\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.653281 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs2jj\" (UniqueName: \"kubernetes.io/projected/37f050ab-b03b-4082-b819-e1bef9642e87-kube-api-access-cs2jj\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.664860 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.675119 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk9ht\" (UniqueName: \"kubernetes.io/projected/3bf38b39-12f9-48ea-81dc-1e39a057074a-kube-api-access-tk9ht\") pod \"openshift-config-operator-7777fb866f-dq8p9\" (UID: \"3bf38b39-12f9-48ea-81dc-1e39a057074a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.677031 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zgd7r"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.692686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt29v\" (UniqueName: \"kubernetes.io/projected/098686c6-8100-46dd-ae99-4576faf0d50e-kube-api-access-mt29v\") pod \"cluster-samples-operator-665b6dd947-nqxhf\" (UID: \"098686c6-8100-46dd-ae99-4576faf0d50e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.698184 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.718626 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.741243 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.747950 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.758040 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.789930 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.797641 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.798913 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.808881 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ktltc"] Dec 09 16:58:38 crc kubenswrapper[4853]: W1209 16:58:38.810389 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd00d6d_d30c_49c5_aa61_3392f0b29a86.slice/crio-fd3802de2b612c33b348fde56330af6595d7b48b120b0ecd8a7416e49d4a1985 WatchSource:0}: Error finding container fd3802de2b612c33b348fde56330af6595d7b48b120b0ecd8a7416e49d4a1985: Status 404 returned error can't find the container with id fd3802de2b612c33b348fde56330af6595d7b48b120b0ecd8a7416e49d4a1985 Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.820084 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.837824 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.843947 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvjb6"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.858159 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.879503 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 16:58:38 crc kubenswrapper[4853]: W1209 16:58:38.893857 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ac21b02_cdf3_4f92_8f7b_898015277e7a.slice/crio-dcda289da33a93e3227420e849f433c2e5dc668cc9613be96f35dd332627871f WatchSource:0}: Error finding container dcda289da33a93e3227420e849f433c2e5dc668cc9613be96f35dd332627871f: Status 404 returned error can't find the container with id dcda289da33a93e3227420e849f433c2e5dc668cc9613be96f35dd332627871f Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.898145 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.922006 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 16:58:38 crc 
kubenswrapper[4853]: I1209 16:58:38.938297 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.941555 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.947580 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.957530 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.968236 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5vb7d"] Dec 09 16:58:38 crc kubenswrapper[4853]: I1209 16:58:38.978667 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.018143 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d69pz\" (UniqueName: \"kubernetes.io/projected/d424076c-9966-48fc-94c8-9932dccc8658-kube-api-access-d69pz\") pod \"machine-approver-56656f9798-9sbmm\" (UID: \"d424076c-9966-48fc-94c8-9932dccc8658\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.019138 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.040894 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.059210 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.078478 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.099010 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.118825 4853 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.139132 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.159369 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.172124 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" event={"ID":"7f4d7735-71ec-48b9-b4dc-017a983a2e2c","Type":"ContainerStarted","Data":"d30cc3e7ba72447a87486e701f98c9fccc33e94f665487c6b786e826ba7fcfbb"} Dec 09 16:58:39 crc kubenswrapper[4853]: 
I1209 16:58:39.174567 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" event={"ID":"5ac21b02-cdf3-4f92-8f7b-898015277e7a","Type":"ContainerStarted","Data":"00ee9d6714d3d2049cf11288d32e949f6a6a5ee09d25e202df592da63210ccfa"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.174635 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" event={"ID":"5ac21b02-cdf3-4f92-8f7b-898015277e7a","Type":"ContainerStarted","Data":"dcda289da33a93e3227420e849f433c2e5dc668cc9613be96f35dd332627871f"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.176255 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" event={"ID":"a93d1b37-5404-44ed-83de-51eb04c1b2c4","Type":"ContainerStarted","Data":"38a387f43463dd90aed571cee6ebcfd23744d179cca2c305603be5b617a3f98b"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.176286 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" event={"ID":"a93d1b37-5404-44ed-83de-51eb04c1b2c4","Type":"ContainerStarted","Data":"37b461388a3e0baf4a4de3668a8453a7feca4e166d204e41f63197d53a955a65"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.176301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" event={"ID":"a93d1b37-5404-44ed-83de-51eb04c1b2c4","Type":"ContainerStarted","Data":"19bc625f1dd0e563603003e99a522502137ff8a7590dd429cd5f7cbb69584be8"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.182181 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" event={"ID":"2bd00d6d-d30c-49c5-aa61-3392f0b29a86","Type":"ContainerStarted","Data":"e80ce451813c3b13a5092780abffc1f0b5e3c40bc7fdf2bd28670d973bd25684"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.182214 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" event={"ID":"2bd00d6d-d30c-49c5-aa61-3392f0b29a86","Type":"ContainerStarted","Data":"fd3802de2b612c33b348fde56330af6595d7b48b120b0ecd8a7416e49d4a1985"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.184052 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.184155 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf"] Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.185337 4853 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7sm7j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.185369 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection 
refused" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.188203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" event={"ID":"b2b1f86c-b414-418d-973d-6db3442a1bd1","Type":"ContainerStarted","Data":"7dc28ae7a9d0e9e279c0cce0341e3e25974f5d4ffaed1341373dd491f1143eb8"} Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.199407 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqjtz\" (UniqueName: \"kubernetes.io/projected/3d2c3869-682e-4c32-b325-afd38cd76667-kube-api-access-bqjtz\") pod \"authentication-operator-69f744f599-9blst\" (UID: \"3d2c3869-682e-4c32-b325-afd38cd76667\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.214589 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st4pw\" (UniqueName: \"kubernetes.io/projected/5b33f6ba-88ff-4fd0-876d-871cf36db1cf-kube-api-access-st4pw\") pod \"downloads-7954f5f757-ccm5l\" (UID: \"5b33f6ba-88ff-4fd0-876d-871cf36db1cf\") " pod="openshift-console/downloads-7954f5f757-ccm5l" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.222718 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"] Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.229730 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.233659 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t945w\" (UniqueName: \"kubernetes.io/projected/dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74-kube-api-access-t945w\") pod \"apiserver-7bbb656c7d-4dwl5\" (UID: \"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:39 crc kubenswrapper[4853]: W1209 16:58:39.253293 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf38b39_12f9_48ea_81dc_1e39a057074a.slice/crio-3fc8d3ebe4baad84263797b816961b572afe4581cafe7acb08a92afe9c02e27d WatchSource:0}: Error finding container 3fc8d3ebe4baad84263797b816961b572afe4581cafe7acb08a92afe9c02e27d: Status 404 returned error can't find the container with id 3fc8d3ebe4baad84263797b816961b572afe4581cafe7acb08a92afe9c02e27d Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.255685 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.265874 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37f050ab-b03b-4082-b819-e1bef9642e87-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z8mb7\" (UID: \"37f050ab-b03b-4082-b819-e1bef9642e87\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.274252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlw4j\" (UniqueName: \"kubernetes.io/projected/fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f-kube-api-access-dlw4j\") pod \"openshift-apiserver-operator-796bbdcf4f-vwzjm\" (UID: \"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.294973 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn6ml\" (UniqueName: \"kubernetes.io/projected/e1112df0-82ff-460a-b9c3-e3bc662862d1-kube-api-access-hn6ml\") pod \"console-operator-58897d9998-jk2pg\" (UID: \"e1112df0-82ff-460a-b9c3-e3bc662862d1\") " pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.317408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4q6k\" (UniqueName: \"kubernetes.io/projected/71b908be-495e-4eb2-8429-56c89e4344f4-kube-api-access-c4q6k\") pod \"console-f9d7485db-xp79b\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.325303 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.339231 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.351126 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ccm5l" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.357922 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.376688 4853 request.go:700] Waited for 1.018893646s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0 Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.378833 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.397810 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9blst"] Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.398181 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.417731 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.417827 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.438508 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.446215 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.457430 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.478140 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.488471 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:39 crc kubenswrapper[4853]: E1209 16:58:39.488907 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 17:00:41.4888528 +0000 UTC m=+268.423591982 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.489140 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.489234 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.491138 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.501251 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.501679 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.507695 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:39 crc kubenswrapper[4853]: W1209 16:58:39.513126 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d2c3869_682e_4c32_b325_afd38cd76667.slice/crio-81da3c2a0ba8115a2aec25a58e9020cca7c18f87462dd562e8a9e792d1ee6f34 WatchSource:0}: Error finding container 81da3c2a0ba8115a2aec25a58e9020cca7c18f87462dd562e8a9e792d1ee6f34: Status 404 returned error can't find the container with id 81da3c2a0ba8115a2aec25a58e9020cca7c18f87462dd562e8a9e792d1ee6f34 Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.518822 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.547022 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.547925 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.559490 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.646591 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.647153 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.647221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.649698 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.671636 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.675212 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l457g\" (UniqueName: \"kubernetes.io/projected/9a91e971-b137-461e-b373-9b44ded89e8e-kube-api-access-l457g\") pod \"machine-config-controller-84d6567774-4gqhk\" (UID: \"9a91e971-b137-461e-b373-9b44ded89e8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.677189 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-826zl\" (UniqueName: \"kubernetes.io/projected/7585c230-8db6-45bd-bd39-17d27ff826dd-kube-api-access-826zl\") pod \"dns-default-5grrn\" (UID: \"7585c230-8db6-45bd-bd39-17d27ff826dd\") " pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.678656 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.684671 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdssg\" (UniqueName: \"kubernetes.io/projected/3190e757-5290-4ff7-93ba-56703960ed28-kube-api-access-qdssg\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.686221 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.693117 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kktlh\" (UniqueName: \"kubernetes.io/projected/8deaa879-6b9f-4b7b-8cef-f48461d13c5f-kube-api-access-kktlh\") pod \"dns-operator-744455d44c-4bfgv\" (UID: \"8deaa879-6b9f-4b7b-8cef-f48461d13c5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.707427 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.787053 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.794431 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.801374 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.812698 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smlnw\" (UniqueName: \"kubernetes.io/projected/6ca85555-03f6-4585-bef7-a30cfdce8a59-kube-api-access-smlnw\") pod \"etcd-operator-b45778765-6cmx8\" (UID: \"6ca85555-03f6-4585-bef7-a30cfdce8a59\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.831852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gc9\" (UniqueName: \"kubernetes.io/projected/d8892469-e13f-4dcf-ab96-106be91ab901-kube-api-access-j6gc9\") pod \"router-default-5444994796-ffzns\" (UID: \"d8892469-e13f-4dcf-ab96-106be91ab901\") " pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.839253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8rzp\" (UniqueName: \"kubernetes.io/projected/f8f3c87c-9080-4011-97f8-2d04ddeee5f6-kube-api-access-q8rzp\") pod \"openshift-controller-manager-operator-756b6f6bc6-svwfg\" (UID: \"f8f3c87c-9080-4011-97f8-2d04ddeee5f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.853796 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42d1afc3-f774-4b6c-8dc5-ca20727f4203-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-984fm\" (UID: \"42d1afc3-f774-4b6c-8dc5-ca20727f4203\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.863639 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3190e757-5290-4ff7-93ba-56703960ed28-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6lbnv\" (UID: \"3190e757-5290-4ff7-93ba-56703960ed28\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866709 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-tls\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866751 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866773 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6ab11c3-427d-46dd-83e4-038afc30574a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866800 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-certificates\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866825 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf0e66b-9edb-4648-b675-1ffa480ec186-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866847 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfbe3f08-01f6-419f-9f29-fcf674b02167-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866869 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: 
\"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866896 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-bound-sa-token\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866919 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6ab11c3-427d-46dd-83e4-038afc30574a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866943 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cf0e66b-9edb-4648-b675-1ffa480ec186-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866971 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbnn\" (UniqueName: \"kubernetes.io/projected/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-kube-api-access-sbbnn\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.866993 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf0e66b-9edb-4648-b675-1ffa480ec186-config\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867019 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-images\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-trusted-ca\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867066 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbe3f08-01f6-419f-9f29-fcf674b02167-config\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: 
\"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867087 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vb8w\" (UniqueName: \"kubernetes.io/projected/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-kube-api-access-9vb8w\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867109 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-proxy-tls\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867129 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867173 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbe3f08-01f6-419f-9f29-fcf674b02167-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867195 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpf4\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-kube-api-access-vrpf4\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.867226 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: E1209 16:58:39.869588 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:40.369573375 +0000 UTC m=+147.304312557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.931245 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.931826 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.967881 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968256 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-certificates\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968290 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf0e66b-9edb-4648-b675-1ffa480ec186-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968310 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfbe3f08-01f6-419f-9f29-fcf674b02167-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968331 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968352 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-bound-sa-token\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968377 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6ab11c3-427d-46dd-83e4-038afc30574a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968400 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cf0e66b-9edb-4648-b675-1ffa480ec186-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968466 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbnn\" (UniqueName: \"kubernetes.io/projected/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-kube-api-access-sbbnn\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968492 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf0e66b-9edb-4648-b675-1ffa480ec186-config\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-images\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-trusted-ca\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968557 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbe3f08-01f6-419f-9f29-fcf674b02167-config\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968580 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vb8w\" (UniqueName: \"kubernetes.io/projected/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-kube-api-access-9vb8w\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968613 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-proxy-tls\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968632 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968669 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbe3f08-01f6-419f-9f29-fcf674b02167-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968689 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrpf4\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-kube-api-access-vrpf4\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968732 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-tls\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968766 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.968784 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6ab11c3-427d-46dd-83e4-038afc30574a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.972499 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6ab11c3-427d-46dd-83e4-038afc30574a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: E1209 16:58:39.972737 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 16:58:40.472718805 +0000 UTC m=+147.407457987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.973730 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-certificates\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.981439 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.981765 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6ab11c3-427d-46dd-83e4-038afc30574a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.982320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.982827 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbe3f08-01f6-419f-9f29-fcf674b02167-config\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:39 crc kubenswrapper[4853]: I1209 16:58:39.985211 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:39.999374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-images\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:40 crc 
kubenswrapper[4853]: I1209 16:58:39.999950 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cf0e66b-9edb-4648-b675-1ffa480ec186-config\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.022820 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.023193 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.026884 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.029866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-trusted-ca\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.069257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfbe3f08-01f6-419f-9f29-fcf674b02167-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070292 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-config\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070330 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9833f054-9348-44a7-80b8-2ecfa3a363ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070356 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-webhook-cert\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070380 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zmm\" (UniqueName: \"kubernetes.io/projected/a902e816-a901-45c7-b806-bb25b1811c09-kube-api-access-g6zmm\") pod \"service-ca-9c57cc56f-g2xml\" (UID: 
\"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxtq\" (UniqueName: \"kubernetes.io/projected/9833f054-9348-44a7-80b8-2ecfa3a363ce-kube-api-access-lsxtq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070520 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-csi-data-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070541 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhb7z\" (UniqueName: \"kubernetes.io/projected/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-kube-api-access-nhb7z\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72ec5fd6-473e-497f-b946-4bed1cd7a594-profile-collector-cert\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070649 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tjj\" (UniqueName: \"kubernetes.io/projected/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-kube-api-access-66tjj\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070683 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-certs\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070749 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-plugins-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070778 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4118aad-5782-4909-a5df-28f0f772ef10-config-volume\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hm8t\" (UniqueName: \"kubernetes.io/projected/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-kube-api-access-6hm8t\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070837 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6krp\" (UniqueName: \"kubernetes.io/projected/044a407a-76c8-49bc-8d24-040de15c1b88-kube-api-access-g6krp\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qdc9\" (UID: \"044a407a-76c8-49bc-8d24-040de15c1b88\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.070949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpxq\" (UniqueName: \"kubernetes.io/projected/b5d46501-ce60-405c-8906-6983a65670fb-kube-api-access-5tpxq\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071000 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txg6\" (UniqueName: \"kubernetes.io/projected/a4118aad-5782-4909-a5df-28f0f772ef10-kube-api-access-8txg6\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071024 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872dr\" (UniqueName: \"kubernetes.io/projected/7b988e02-a4b1-4b17-a197-db4b5ad46e99-kube-api-access-872dr\") pod \"ingress-canary-xl5jc\" (UID: \"7b988e02-a4b1-4b17-a197-db4b5ad46e99\") " pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071064 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlb5\" (UniqueName: \"kubernetes.io/projected/06e2e19c-ec79-45f5-bc19-cfb6d587c6a0-kube-api-access-rtlb5\") pod \"migrator-59844c95c7-kzd9d\" (UID: \"06e2e19c-ec79-45f5-bc19-cfb6d587c6a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071091 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071112 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggcd\" (UniqueName: \"kubernetes.io/projected/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-kube-api-access-hggcd\") pod \"service-ca-operator-777779d784-gv85n\" 
(UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071144 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b988e02-a4b1-4b17-a197-db4b5ad46e99-cert\") pod \"ingress-canary-xl5jc\" (UID: \"7b988e02-a4b1-4b17-a197-db4b5ad46e99\") " pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071173 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071239 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9833f054-9348-44a7-80b8-2ecfa3a363ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071263 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3c3fd371-664b-41dc-8a29-6295563c0523-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qm8cq\" (UID: \"3c3fd371-664b-41dc-8a29-6295563c0523\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-socket-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdc5h\" (UniqueName: \"kubernetes.io/projected/e193fa59-1518-4852-9801-29fd9d4afcb9-kube-api-access-hdc5h\") pod \"package-server-manager-789f6589d5-9frbh\" (UID: \"e193fa59-1518-4852-9801-29fd9d4afcb9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.071473 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72ec5fd6-473e-497f-b946-4bed1cd7a594-srv-cert\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.072990 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:40.572968142 +0000 UTC m=+147.507707504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.073784 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-serving-cert\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.073826 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-apiservice-cert\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.073928 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-node-bootstrap-token\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.074011 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/044a407a-76c8-49bc-8d24-040de15c1b88-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qdc9\" (UID: \"044a407a-76c8-49bc-8d24-040de15c1b88\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.074189 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-mountpoint-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.075702 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfxz\" (UniqueName: \"kubernetes.io/projected/72ec5fd6-473e-497f-b946-4bed1cd7a594-kube-api-access-cmfxz\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.075770 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-registration-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " 
pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.076264 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4118aad-5782-4909-a5df-28f0f772ef10-secret-volume\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.076830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e193fa59-1518-4852-9801-29fd9d4afcb9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9frbh\" (UID: \"e193fa59-1518-4852-9801-29fd9d4afcb9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.076918 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cp6\" (UniqueName: \"kubernetes.io/projected/3c3fd371-664b-41dc-8a29-6295563c0523-kube-api-access-64cp6\") pod \"multus-admission-controller-857f4d67dd-qm8cq\" (UID: \"3c3fd371-664b-41dc-8a29-6295563c0523\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.077121 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-tmpfs\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.077422 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a902e816-a901-45c7-b806-bb25b1811c09-signing-cabundle\") pod \"service-ca-9c57cc56f-g2xml\" (UID: \"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.077465 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a902e816-a901-45c7-b806-bb25b1811c09-signing-key\") pod \"service-ca-9c57cc56f-g2xml\" (UID: \"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.077580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-srv-cert\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.083372 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbe3f08-01f6-419f-9f29-fcf674b02167-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-smchn\" (UID: \"dfbe3f08-01f6-419f-9f29-fcf674b02167\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.095791 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cf0e66b-9edb-4648-b675-1ffa480ec186-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.123588 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-tls\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.139528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-bound-sa-token\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.139801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf0e66b-9edb-4648-b675-1ffa480ec186-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7f2qt\" (UID: \"0cf0e66b-9edb-4648-b675-1ffa480ec186\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.145054 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vb8w\" (UniqueName: \"kubernetes.io/projected/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-kube-api-access-9vb8w\") pod \"marketplace-operator-79b997595-cw4kq\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.147269 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-proxy-tls\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.175935 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ccm5l"] Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.176816 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpf4\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-kube-api-access-vrpf4\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.178796 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-config\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179085 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9833f054-9348-44a7-80b8-2ecfa3a363ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-webhook-cert\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179145 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6zmm\" (UniqueName: \"kubernetes.io/projected/a902e816-a901-45c7-b806-bb25b1811c09-kube-api-access-g6zmm\") pod \"service-ca-9c57cc56f-g2xml\" (UID: \"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179164 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxtq\" (UniqueName: \"kubernetes.io/projected/9833f054-9348-44a7-80b8-2ecfa3a363ce-kube-api-access-lsxtq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179181 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-csi-data-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhb7z\" (UniqueName: \"kubernetes.io/projected/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-kube-api-access-nhb7z\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179213 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72ec5fd6-473e-497f-b946-4bed1cd7a594-profile-collector-cert\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179231 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tjj\" (UniqueName: \"kubernetes.io/projected/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-kube-api-access-66tjj\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179252 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-certs\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-plugins-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179283 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4118aad-5782-4909-a5df-28f0f772ef10-config-volume\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179298 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hm8t\" (UniqueName: \"kubernetes.io/projected/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-kube-api-access-6hm8t\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179314 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6krp\" (UniqueName: \"kubernetes.io/projected/044a407a-76c8-49bc-8d24-040de15c1b88-kube-api-access-g6krp\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qdc9\" (UID: \"044a407a-76c8-49bc-8d24-040de15c1b88\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tpxq\" (UniqueName: \"kubernetes.io/projected/b5d46501-ce60-405c-8906-6983a65670fb-kube-api-access-5tpxq\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179350 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txg6\" (UniqueName: \"kubernetes.io/projected/a4118aad-5782-4909-a5df-28f0f772ef10-kube-api-access-8txg6\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179366 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872dr\" (UniqueName: 
\"kubernetes.io/projected/7b988e02-a4b1-4b17-a197-db4b5ad46e99-kube-api-access-872dr\") pod \"ingress-canary-xl5jc\" (UID: \"7b988e02-a4b1-4b17-a197-db4b5ad46e99\") " pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179384 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlb5\" (UniqueName: \"kubernetes.io/projected/06e2e19c-ec79-45f5-bc19-cfb6d587c6a0-kube-api-access-rtlb5\") pod \"migrator-59844c95c7-kzd9d\" (UID: \"06e2e19c-ec79-45f5-bc19-cfb6d587c6a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179403 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179418 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hggcd\" (UniqueName: \"kubernetes.io/projected/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-kube-api-access-hggcd\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179433 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b988e02-a4b1-4b17-a197-db4b5ad46e99-cert\") pod \"ingress-canary-xl5jc\" (UID: \"7b988e02-a4b1-4b17-a197-db4b5ad46e99\") " pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179458 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9833f054-9348-44a7-80b8-2ecfa3a363ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179474 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3c3fd371-664b-41dc-8a29-6295563c0523-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qm8cq\" (UID: \"3c3fd371-664b-41dc-8a29-6295563c0523\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179491 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-socket-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179507 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdc5h\" (UniqueName: \"kubernetes.io/projected/e193fa59-1518-4852-9801-29fd9d4afcb9-kube-api-access-hdc5h\") pod \"package-server-manager-789f6589d5-9frbh\" (UID: \"e193fa59-1518-4852-9801-29fd9d4afcb9\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179525 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72ec5fd6-473e-497f-b946-4bed1cd7a594-srv-cert\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179543 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-serving-cert\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179558 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-apiservice-cert\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179576 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-node-bootstrap-token\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179613 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/044a407a-76c8-49bc-8d24-040de15c1b88-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qdc9\" (UID: \"044a407a-76c8-49bc-8d24-040de15c1b88\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179633 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-mountpoint-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179655 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmfxz\" (UniqueName: \"kubernetes.io/projected/72ec5fd6-473e-497f-b946-4bed1cd7a594-kube-api-access-cmfxz\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-registration-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179689 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4118aad-5782-4909-a5df-28f0f772ef10-secret-volume\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179708 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e193fa59-1518-4852-9801-29fd9d4afcb9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9frbh\" (UID: \"e193fa59-1518-4852-9801-29fd9d4afcb9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64cp6\" (UniqueName: \"kubernetes.io/projected/3c3fd371-664b-41dc-8a29-6295563c0523-kube-api-access-64cp6\") pod \"multus-admission-controller-857f4d67dd-qm8cq\" (UID: \"3c3fd371-664b-41dc-8a29-6295563c0523\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179741 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-tmpfs\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a902e816-a901-45c7-b806-bb25b1811c09-signing-cabundle\") pod \"service-ca-9c57cc56f-g2xml\" (UID: \"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179781 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a902e816-a901-45c7-b806-bb25b1811c09-signing-key\") pod \"service-ca-9c57cc56f-g2xml\" (UID: \"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.179796 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-srv-cert\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.186308 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:40.686277824 +0000 UTC m=+147.621017016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.187170 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-config\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.188117 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-socket-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.188145 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-csi-data-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.188616 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4118aad-5782-4909-a5df-28f0f772ef10-config-volume\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.188674 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9833f054-9348-44a7-80b8-2ecfa3a363ce-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.188682 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-plugins-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.190751 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-mountpoint-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.200355 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a902e816-a901-45c7-b806-bb25b1811c09-signing-cabundle\") pod \"service-ca-9c57cc56f-g2xml\" (UID: 
\"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.200659 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-webhook-cert\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.201483 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-tmpfs\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.201870 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/044a407a-76c8-49bc-8d24-040de15c1b88-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qdc9\" (UID: \"044a407a-76c8-49bc-8d24-040de15c1b88\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.202296 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-serving-cert\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.202449 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5d46501-ce60-405c-8906-6983a65670fb-registration-dir\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.202503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-certs\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.202553 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4118aad-5782-4909-a5df-28f0f772ef10-secret-volume\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.202646 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbnn\" (UniqueName: \"kubernetes.io/projected/c4dc629c-36aa-41e2-8daa-a4953c31ecfa-kube-api-access-sbbnn\") pod \"machine-config-operator-74547568cd-tmvrp\" (UID: \"c4dc629c-36aa-41e2-8daa-a4953c31ecfa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.202997 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.203179 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b988e02-a4b1-4b17-a197-db4b5ad46e99-cert\") pod \"ingress-canary-xl5jc\" (UID: \"7b988e02-a4b1-4b17-a197-db4b5ad46e99\") " pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.203361 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72ec5fd6-473e-497f-b946-4bed1cd7a594-profile-collector-cert\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.203445 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-node-bootstrap-token\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.205835 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3c3fd371-664b-41dc-8a29-6295563c0523-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qm8cq\" (UID: \"3c3fd371-664b-41dc-8a29-6295563c0523\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.206258 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-apiservice-cert\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.207396 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-profile-collector-cert\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.207953 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9833f054-9348-44a7-80b8-2ecfa3a363ce-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.208387 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e193fa59-1518-4852-9801-29fd9d4afcb9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9frbh\" (UID: \"e193fa59-1518-4852-9801-29fd9d4afcb9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 
16:58:40.208675 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72ec5fd6-473e-497f-b946-4bed1cd7a594-srv-cert\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.210777 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a902e816-a901-45c7-b806-bb25b1811c09-signing-key\") pod \"service-ca-9c57cc56f-g2xml\" (UID: \"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.213492 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.213564 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" event={"ID":"098686c6-8100-46dd-ae99-4576faf0d50e","Type":"ContainerStarted","Data":"66252f299f0e77b588496e64f47f928e1a67a86b702a843d983a177286c79463"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.217340 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-srv-cert\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.220106 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" event={"ID":"3d2c3869-682e-4c32-b325-afd38cd76667","Type":"ContainerStarted","Data":"81da3c2a0ba8115a2aec25a58e9020cca7c18f87462dd562e8a9e792d1ee6f34"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.224222 4853 generic.go:334] "Generic (PLEG): container finished" podID="b2b1f86c-b414-418d-973d-6db3442a1bd1" containerID="af3266dbe8a477aaf646236f0504e7f901d3c6f5f71a5698d9927320436e546a" exitCode=0 Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.224260 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" event={"ID":"b2b1f86c-b414-418d-973d-6db3442a1bd1","Type":"ContainerDied","Data":"af3266dbe8a477aaf646236f0504e7f901d3c6f5f71a5698d9927320436e546a"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.226639 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" event={"ID":"d424076c-9966-48fc-94c8-9932dccc8658","Type":"ContainerStarted","Data":"9177e2a9633f79b8b4678d8a1b681b9194ece767b18ba59facfdfba05a4f86b9"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.226686 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" event={"ID":"d424076c-9966-48fc-94c8-9932dccc8658","Type":"ContainerStarted","Data":"fa2fa387ec7012b1564477628e1dd7670401d2dce64a3036e01608bafe9c79b5"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.228469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" 
event={"ID":"3bf38b39-12f9-48ea-81dc-1e39a057074a","Type":"ContainerDied","Data":"781631e94cd93a794efb6af90c915c82652c437e47938b5aad483d0a4bfee85e"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.228491 4853 generic.go:334] "Generic (PLEG): container finished" podID="3bf38b39-12f9-48ea-81dc-1e39a057074a" containerID="781631e94cd93a794efb6af90c915c82652c437e47938b5aad483d0a4bfee85e" exitCode=0 Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.228640 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" event={"ID":"3bf38b39-12f9-48ea-81dc-1e39a057074a","Type":"ContainerStarted","Data":"3fc8d3ebe4baad84263797b816961b572afe4581cafe7acb08a92afe9c02e27d"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.237383 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hm8t\" (UniqueName: \"kubernetes.io/projected/a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c-kube-api-access-6hm8t\") pod \"catalog-operator-68c6474976-2jncz\" (UID: \"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.239932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" event={"ID":"7f4d7735-71ec-48b9-b4dc-017a983a2e2c","Type":"ContainerStarted","Data":"a5a4bd22eff497fbd250f3bbce8ec254c64891a45b192d774c28e3f082c7d101"} Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.239959 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.240733 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.241132 4853 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7sm7j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.241164 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.241984 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6krp\" (UniqueName: \"kubernetes.io/projected/044a407a-76c8-49bc-8d24-040de15c1b88-kube-api-access-g6krp\") pod \"control-plane-machine-set-operator-78cbb6b69f-8qdc9\" (UID: \"044a407a-76c8-49bc-8d24-040de15c1b88\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.278122 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.281369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.281799 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:40.781784251 +0000 UTC m=+147.716523433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.294555 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.311997 4853 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5vb7d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.312041 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" podUID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.312115 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qvjb6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.312129 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.316467 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.364807 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.382777 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.388621 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:40.888580547 +0000 UTC m=+147.823319739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.403568 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txg6\" (UniqueName: \"kubernetes.io/projected/a4118aad-5782-4909-a5df-28f0f772ef10-kube-api-access-8txg6\") pod \"collect-profiles-29421645-4bbwz\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.404171 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhb7z\" (UniqueName: \"kubernetes.io/projected/ba27f2a6-9dbe-4e68-a600-ca81e501b8f4-kube-api-access-nhb7z\") pod \"machine-config-server-hvkhf\" (UID: \"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4\") " pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.405659 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872dr\" (UniqueName: \"kubernetes.io/projected/7b988e02-a4b1-4b17-a197-db4b5ad46e99-kube-api-access-872dr\") pod \"ingress-canary-xl5jc\" (UID: \"7b988e02-a4b1-4b17-a197-db4b5ad46e99\") " pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.406052 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tpxq\" (UniqueName: \"kubernetes.io/projected/b5d46501-ce60-405c-8906-6983a65670fb-kube-api-access-5tpxq\") pod \"csi-hostpathplugin-w78wr\" (UID: \"b5d46501-ce60-405c-8906-6983a65670fb\") " pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.407585 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hggcd\" (UniqueName: \"kubernetes.io/projected/42c79cc6-b6ae-4e95-b078-c3341d0c6f7a-kube-api-access-hggcd\") pod \"service-ca-operator-777779d784-gv85n\" (UID: \"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.410809 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rtlb5\" (UniqueName: \"kubernetes.io/projected/06e2e19c-ec79-45f5-bc19-cfb6d587c6a0-kube-api-access-rtlb5\") pod \"migrator-59844c95c7-kzd9d\" (UID: \"06e2e19c-ec79-45f5-bc19-cfb6d587c6a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.414083 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tjj\" (UniqueName: \"kubernetes.io/projected/b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71-kube-api-access-66tjj\") pod \"packageserver-d55dfcdfc-lvvxg\" (UID: \"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.414310 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6zmm\" (UniqueName: \"kubernetes.io/projected/a902e816-a901-45c7-b806-bb25b1811c09-kube-api-access-g6zmm\") pod \"service-ca-9c57cc56f-g2xml\" (UID: \"a902e816-a901-45c7-b806-bb25b1811c09\") " pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.414585 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.428441 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.432253 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hvkhf" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.435860 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xl5jc" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.525196 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.526350 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.026272584 +0000 UTC m=+147.961011966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.529250 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmfxz\" (UniqueName: \"kubernetes.io/projected/72ec5fd6-473e-497f-b946-4bed1cd7a594-kube-api-access-cmfxz\") pod \"olm-operator-6b444d44fb-crptw\" (UID: \"72ec5fd6-473e-497f-b946-4bed1cd7a594\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.541203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdc5h\" (UniqueName: \"kubernetes.io/projected/e193fa59-1518-4852-9801-29fd9d4afcb9-kube-api-access-hdc5h\") pod \"package-server-manager-789f6589d5-9frbh\" (UID: \"e193fa59-1518-4852-9801-29fd9d4afcb9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.557201 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxtq\" (UniqueName: \"kubernetes.io/projected/9833f054-9348-44a7-80b8-2ecfa3a363ce-kube-api-access-lsxtq\") pod \"kube-storage-version-migrator-operator-b67b599dd-hkdtl\" (UID: \"9833f054-9348-44a7-80b8-2ecfa3a363ce\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.557275 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cp6\" (UniqueName: \"kubernetes.io/projected/3c3fd371-664b-41dc-8a29-6295563c0523-kube-api-access-64cp6\") pod \"multus-admission-controller-857f4d67dd-qm8cq\" (UID: \"3c3fd371-664b-41dc-8a29-6295563c0523\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.622072 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" podStartSLOduration=125.62205068 podStartE2EDuration="2m5.62205068s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:40.619278672 +0000 UTC m=+147.554017854" watchObservedRunningTime="2025-12-09 16:58:40.62205068 +0000 UTC m=+147.556789862" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.624016 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.627942 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.628221 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.128205006 +0000 UTC m=+148.062944188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.632412 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.661961 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.662038 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.669446 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.676011 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.701013 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.707975 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.731971 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.732277 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 16:58:41.232264454 +0000 UTC m=+148.167003636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.838329 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.838619 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.338588535 +0000 UTC m=+148.273327717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:40 crc kubenswrapper[4853]: I1209 16:58:40.961950 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:40 crc kubenswrapper[4853]: E1209 16:58:40.962632 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.462583217 +0000 UTC m=+148.397322399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.063627 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.064076 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.564043603 +0000 UTC m=+148.498782785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.221937 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.222277 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.722262904 +0000 UTC m=+148.657002086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.296659 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" event={"ID":"3d2c3869-682e-4c32-b325-afd38cd76667","Type":"ContainerStarted","Data":"cbb260faa25994760b367cde0d66779f5a11809a3a09b93754ae49b95b52f3bf"} Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.311009 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hvkhf" event={"ID":"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4","Type":"ContainerStarted","Data":"83739c45234f83083ae730f2e0929dd4a6028a807fb6c1136a7cbbb19d2babd8"} Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.311990 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ccm5l" event={"ID":"5b33f6ba-88ff-4fd0-876d-871cf36db1cf","Type":"ContainerStarted","Data":"fa468e4d7ebd20494094c4a7695d3c8f695d8919d18fc0454cf68713c0740bc9"} Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.313035 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" event={"ID":"098686c6-8100-46dd-ae99-4576faf0d50e","Type":"ContainerStarted","Data":"1bc9a3afe7d8255534836b7d0647f38c0f2dd72cda1c033fc134150a6f5f298c"} Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.313782 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ffzns" event={"ID":"d8892469-e13f-4dcf-ab96-106be91ab901","Type":"ContainerStarted","Data":"5bd4924b2fdcdfb14cd70313768cd9500532b7c675d01cdb8003f9dcdef381fd"} Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.320041 4853 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5vb7d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.320093 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" podUID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.320534 4853 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7sm7j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.320556 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.320798 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qvjb6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.320819 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.324683 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.324823 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.824801685 +0000 UTC m=+148.759540867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.324902 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.325206 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.825199497 +0000 UTC m=+148.759938669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.430665 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.430827 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.930801784 +0000 UTC m=+148.865540966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.431292 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.434513 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:41.934488952 +0000 UTC m=+148.869228314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.535236 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.535660 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.035642228 +0000 UTC m=+148.970381410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.637651 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.638172 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.138140576 +0000 UTC m=+149.072879758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.744514 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.744991 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.244971024 +0000 UTC m=+149.179710206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.849208 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.849541 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.349526118 +0000 UTC m=+149.284265300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.903869 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" podStartSLOduration=125.903853315 podStartE2EDuration="2m5.903853315s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:41.900798588 +0000 UTC m=+148.835537770" watchObservedRunningTime="2025-12-09 16:58:41.903853315 +0000 UTC m=+148.838592497"
Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.941014 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" podStartSLOduration=125.940992426 podStartE2EDuration="2m5.940992426s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:41.924532503 +0000 UTC m=+148.859271695" watchObservedRunningTime="2025-12-09 16:58:41.940992426 +0000 UTC m=+148.875731608"
Dec 09 16:58:41 crc kubenswrapper[4853]: I1209 16:58:41.960683 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:41 crc kubenswrapper[4853]: E1209 16:58:41.960872 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.460847088 +0000 UTC m=+149.395586270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.152356 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.152730 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.652713408 +0000 UTC m=+149.587452590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.262247 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.262536 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.762521449 +0000 UTC m=+149.697260621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.364566 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.364889 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.864873334 +0000 UTC m=+149.799612516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.437705 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ffzns" event={"ID":"d8892469-e13f-4dcf-ab96-106be91ab901","Type":"ContainerStarted","Data":"aca2411e0e728ab9387c39544d9617cecc6fd09669da076776ae6a7e93f1ffd5"}
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.447307 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" event={"ID":"d424076c-9966-48fc-94c8-9932dccc8658","Type":"ContainerStarted","Data":"d17b6b921c6f7fbc622cbc5d9c71c6eeba94867ddd9cd3691f3ab7ff1bfbb69d"}
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.485056 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.486123 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:42.986103778 +0000 UTC m=+149.920842960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.514015 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" event={"ID":"b2b1f86c-b414-418d-973d-6db3442a1bd1","Type":"ContainerStarted","Data":"460ec442625e8693f793385169666e9fd10f3c62ada6ebb6b20c8425b03c6761"}
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.515854 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" event={"ID":"3bf38b39-12f9-48ea-81dc-1e39a057074a","Type":"ContainerStarted","Data":"11a4598f9f93cd3ccc28ed76b5a9d4adc38dc854b368dd2a635fda52c898a279"}
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.516628 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.517995 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hvkhf" event={"ID":"ba27f2a6-9dbe-4e68-a600-ca81e501b8f4","Type":"ContainerStarted","Data":"ab14c7d0ec08ff435d5baa836e3654a9976a048812d63d73476152ad311e2939"}
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.522794 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zgd7r" podStartSLOduration=126.522778344 podStartE2EDuration="2m6.522778344s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.457649593 +0000 UTC m=+149.392388775" watchObservedRunningTime="2025-12-09 16:58:42.522778344 +0000 UTC m=+149.457517526"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.523860 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ccm5l" event={"ID":"5b33f6ba-88ff-4fd0-876d-871cf36db1cf","Type":"ContainerStarted","Data":"a9f1ae16f8632403d69a023c4ea7247ea69c1888648dc48d043d3baf52964970"}
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.524683 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ccm5l"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.528666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" event={"ID":"098686c6-8100-46dd-ae99-4576faf0d50e","Type":"ContainerStarted","Data":"de47d7b95cccc76f6785557bec690817ff0a4ebebdb7e02dd020069acb183e62"}
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.574008 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hvkhf" podStartSLOduration=5.573994752 podStartE2EDuration="5.573994752s" podCreationTimestamp="2025-12-09 16:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.571471643 +0000 UTC m=+149.506210825" watchObservedRunningTime="2025-12-09 16:58:42.573994752 +0000 UTC m=+149.508733934"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.602731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.603029 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.103017635 +0000 UTC m=+150.037756817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.677312 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9" podStartSLOduration=126.677293547 podStartE2EDuration="2m6.677293547s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.61008644 +0000 UTC m=+149.544825622" watchObservedRunningTime="2025-12-09 16:58:42.677293547 +0000 UTC m=+149.612032729"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.693983 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-ccm5l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.694019 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ccm5l" podUID="5b33f6ba-88ff-4fd0-876d-871cf36db1cf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.703289 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.704401 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.204383678 +0000 UTC m=+150.139122860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.796413 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9blst" podStartSLOduration=127.796379733 podStartE2EDuration="2m7.796379733s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.795092422 +0000 UTC m=+149.729831604" watchObservedRunningTime="2025-12-09 16:58:42.796379733 +0000 UTC m=+149.731118915"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.796576 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sbmm" podStartSLOduration=127.79657303 podStartE2EDuration="2m7.79657303s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.682003057 +0000 UTC m=+149.616742239" watchObservedRunningTime="2025-12-09 16:58:42.79657303 +0000 UTC m=+149.731312212"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.845709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.846054 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.346040913 +0000 UTC m=+150.280780095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.915911 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ffzns" podStartSLOduration=126.915893083 podStartE2EDuration="2m6.915893083s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.91486035 +0000 UTC m=+149.849599532" watchObservedRunningTime="2025-12-09 16:58:42.915893083 +0000 UTC m=+149.850632255"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.930319 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nqxhf" podStartSLOduration=127.930296032 podStartE2EDuration="2m7.930296032s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.928634308 +0000 UTC m=+149.863373490" watchObservedRunningTime="2025-12-09 16:58:42.930296032 +0000 UTC m=+149.865035214"
Dec 09 16:58:42 crc kubenswrapper[4853]: I1209 16:58:42.946372 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:42 crc kubenswrapper[4853]: E1209 16:58:42.946641 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.44662658 +0000 UTC m=+150.381365762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.023625 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ffzns"
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.047408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.047863 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.547845779 +0000 UTC m=+150.482584961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.149028 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.149225 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.649188671 +0000 UTC m=+150.583927853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.151003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.151099 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.651089381 +0000 UTC m=+150.585828563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.272763 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.274326 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.774310059 +0000 UTC m=+150.709049241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.416159 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.416459 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:43.916441118 +0000 UTC m=+150.851180300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.517256 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.517455 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.017409368 +0000 UTC m=+150.952148550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.517502 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.517954 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.017939875 +0000 UTC m=+150.952679057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.622025 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.622708 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.122684556 +0000 UTC m=+151.057423748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.630805 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" event={"ID":"b2b1f86c-b414-418d-973d-6db3442a1bd1","Type":"ContainerStarted","Data":"f618a524408ec8ccede7993bb1b99dff61b5e01745a1657087e663b62193fce9"}
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.642379 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-ccm5l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.642661 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ccm5l" podUID="5b33f6ba-88ff-4fd0-876d-871cf36db1cf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.698473 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" podStartSLOduration=128.698453465 podStartE2EDuration="2m8.698453465s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:43.696778401 +0000 UTC m=+150.631517593" watchObservedRunningTime="2025-12-09 16:58:43.698453465 +0000 UTC m=+150.633192647"
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.699747 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ccm5l" podStartSLOduration=127.699737856 podStartE2EDuration="2m7.699737856s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:42.944647588 +0000 UTC m=+149.879386770" watchObservedRunningTime="2025-12-09 16:58:43.699737856 +0000 UTC m=+150.634477038"
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.741668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.743106 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.243088284 +0000 UTC m=+151.177827476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.841068 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:43 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:43 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:43 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.841123 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.846820 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.846989 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.346968087 +0000 UTC m=+151.281707269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.847239 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.847577 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.347569416 +0000 UTC m=+151.282308598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.948510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.948760 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.448728262 +0000 UTC m=+151.383467444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:43 crc kubenswrapper[4853]: I1209 16:58:43.948878 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:43 crc kubenswrapper[4853]: E1209 16:58:43.949233 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.449220548 +0000 UTC m=+151.383959730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.050612 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:44 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:44 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:44 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.050683 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.050818 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.052096 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.552078598 +0000 UTC m=+151.486817780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.052143 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.052387 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.552377758 +0000 UTC m=+151.487116940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.153241 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.153628 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.653611086 +0000 UTC m=+151.588350268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.265553 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.265954 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.765937988 +0000 UTC m=+151.700677170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.364887 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xp79b"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.374245 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.374515 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.87449949 +0000 UTC m=+151.809238672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: W1209 16:58:44.432767 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbe2e3a_9876_4cd2_ba0a_5a3d4fce7e74.slice/crio-c19f5fdf953aafdddd23a76e502b39cd1476c1dfa6f9a3d41e7abf8a74d6eac4 WatchSource:0}: Error finding container c19f5fdf953aafdddd23a76e502b39cd1476c1dfa6f9a3d41e7abf8a74d6eac4: Status 404 returned error can't find the container with id c19f5fdf953aafdddd23a76e502b39cd1476c1dfa6f9a3d41e7abf8a74d6eac4
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.440263 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.469144 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.475232 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.475692 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:44.975676377 +0000 UTC m=+151.910415559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.560794 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4bfgv"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.576092 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.576364 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.076345387 +0000 UTC m=+152.011084569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.651052 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.655054 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" event={"ID":"8deaa879-6b9f-4b7b-8cef-f48461d13c5f","Type":"ContainerStarted","Data":"deb81bcdf6a00f3b1566c116582f6d163eb23cc3547048dd3ece81594d4654d8"}
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.683348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.683870 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.183858187 +0000 UTC m=+152.118597369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.684013 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.694096 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.714514 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xp79b" event={"ID":"71b908be-495e-4eb2-8429-56c89e4344f4","Type":"ContainerStarted","Data":"8df69ff4da87d20a0c42064f17ffe81c18061ff55a09e33b6db972b38e924f41"}
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.742873 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" event={"ID":"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74","Type":"ContainerStarted","Data":"c19f5fdf953aafdddd23a76e502b39cd1476c1dfa6f9a3d41e7abf8a74d6eac4"}
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.773190 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" event={"ID":"9a91e971-b137-461e-b373-9b44ded89e8e","Type":"ContainerStarted","Data":"59dbba439dcfe08c427a228b94ff2bd799137f32bf74b6e91ac15d948603ad3b"}
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.773706 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-ccm5l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.773793 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ccm5l" podUID="5b33f6ba-88ff-4fd0-876d-871cf36db1cf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.777554 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg"]
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.786545 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.786876 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.286861501 +0000 UTC m=+152.221600683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.798958 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dq8p9"
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.895732 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:44 crc kubenswrapper[4853]: E1209 16:58:44.898948 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.398924454 +0000 UTC m=+152.333663636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:44 crc kubenswrapper[4853]: I1209 16:58:44.904437 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jk2pg"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:44.997226 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:44.997682 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.497661044 +0000 UTC m=+152.432400226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.038793 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:45 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:45 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:45 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.038854 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.099067 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.099368 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.599353707 +0000 UTC m=+152.534092889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.157655 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.193540 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.195423 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qm8cq"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.200152 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.200415 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.70040031 +0000 UTC m=+152.635139492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.233967 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.237023 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz"]
Dec 09 16:58:45 crc kubenswrapper[4853]: W1209 16:58:45.237460 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d1afc3_f774_4b6c_8dc5_ca20727f4203.slice/crio-9bb377de61baff20b63ff9785d3ae86f580b274360d76b5d2af96cd96ca9fe78 WatchSource:0}: Error finding container 9bb377de61baff20b63ff9785d3ae86f580b274360d76b5d2af96cd96ca9fe78: Status 404 returned error can't find the container with id 9bb377de61baff20b63ff9785d3ae86f580b274360d76b5d2af96cd96ca9fe78
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.243895 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5grrn"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.251679 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xl5jc"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.258626 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cw4kq"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.278249 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.288681 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.296531 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9"]
Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.303013 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.303327 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.803312432 +0000 UTC m=+152.738051604 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.315193 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w78wr"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.344658 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.347016 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g2xml"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.367060 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.369101 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6cmx8"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.391465 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.409900 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.410134 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:45.910109117 +0000 UTC m=+152.844848299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.412444 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.477873 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gv85n"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.479847 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh"] Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.511255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.511637 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.011623325 +0000 UTC m=+152.946362507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.613231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.613547 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.113519164 +0000 UTC m=+153.048258346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.616951 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.617363 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.117350627 +0000 UTC m=+153.052089809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.717799 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.717956 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.217934255 +0000 UTC m=+153.152673437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.718204 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.718508 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.218496483 +0000 UTC m=+153.153235665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.796948 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" event={"ID":"3190e757-5290-4ff7-93ba-56703960ed28","Type":"ContainerStarted","Data":"e07f1438377c1d2548549ae46c782d05d5505f3ff194252314ee898c27d4cca0"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.796988 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" event={"ID":"3190e757-5290-4ff7-93ba-56703960ed28","Type":"ContainerStarted","Data":"3fa0807267c04874255d6c881a61919fd4d7e94e5d21771c5b103ace8cd404e8"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.796998 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" event={"ID":"3190e757-5290-4ff7-93ba-56703960ed28","Type":"ContainerStarted","Data":"fe5544cd49a12598237a41c065f0f68001542bcc0468608e3b3957e0f312b56b"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.808557 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" event={"ID":"e193fa59-1518-4852-9801-29fd9d4afcb9","Type":"ContainerStarted","Data":"a90ab2a846f7c526bb5674d18f590a4656d8c5113302b7e690b6c5eb95518c56"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.809711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" event={"ID":"7e25b771-f015-46fc-ad94-e8b9aa6b49cb","Type":"ContainerStarted","Data":"fc2864a85e47457d1931309f652c210f90466a8190a7d42e4edb280b9e891b13"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.866567 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lbnv" podStartSLOduration=129.86654498 podStartE2EDuration="2m9.86654498s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:45.866071645 +0000 UTC m=+152.800810827" watchObservedRunningTime="2025-12-09 16:58:45.86654498 +0000 UTC m=+152.801284162" Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.868062 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.868211 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.368180761 +0000 UTC m=+153.302919943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.868430 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.869541 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.369527095 +0000 UTC m=+153.304266277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.871337 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" event={"ID":"06e2e19c-ec79-45f5-bc19-cfb6d587c6a0","Type":"ContainerStarted","Data":"bcf883c5ebe033f515346779edf247d8e530c0532d5320d477a6eef022168f3f"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.883207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" event={"ID":"37f050ab-b03b-4082-b819-e1bef9642e87","Type":"ContainerStarted","Data":"d1f09f334a95f4830d165532dd5cd5231991cb640c897850e722a2faa1bc3dde"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.883264 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" event={"ID":"37f050ab-b03b-4082-b819-e1bef9642e87","Type":"ContainerStarted","Data":"60b64f3029ece972394a60a7c0d72157f353cc5676bfd2226d16e7bb7c4d691d"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.892073 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" event={"ID":"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f","Type":"ContainerStarted","Data":"7e26cca5c6cdbdb607fcce65954639a2a50076e582c5e9646fbb592fa0fcbfae"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.892837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" event={"ID":"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a","Type":"ContainerStarted","Data":"482f3b82354b4000828aebeeb6265f459d41130c82fd1d1e7a0cbe524d2f166b"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.893348 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" event={"ID":"9833f054-9348-44a7-80b8-2ecfa3a363ce","Type":"ContainerStarted","Data":"06fca84fd685bc0027858e73406343419d07fa5004d4ef81c0e55235e8542f66"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.894742 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" event={"ID":"a902e816-a901-45c7-b806-bb25b1811c09","Type":"ContainerStarted","Data":"b3889d1ff46facfba029abd70517a0cc75f8225ae17699d17348ad44c40acf7f"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.895765 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2f6b8214b4bf0b7b82e009e92fa73e99b1649635b24fc2ba8cd657e733598a2b"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.896944 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" 
event={"ID":"e1112df0-82ff-460a-b9c3-e3bc662862d1","Type":"ContainerStarted","Data":"6471b142ff66a6befa6b3efe58c25aac24aa0bd5dfc6aa541e7c5e57063a89a8"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.896976 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" event={"ID":"e1112df0-82ff-460a-b9c3-e3bc662862d1","Type":"ContainerStarted","Data":"1de1a14eed30075db4c1cb1665b6edb2c839401304944a9d63660d9caef697d8"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.899104 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.901540 4853 patch_prober.go:28] interesting pod/console-operator-58897d9998-jk2pg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.901574 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" podUID="e1112df0-82ff-460a-b9c3-e3bc662862d1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.944752 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z8mb7" podStartSLOduration=129.944738656 podStartE2EDuration="2m9.944738656s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:45.942505155 +0000 UTC m=+152.877244337" watchObservedRunningTime="2025-12-09 16:58:45.944738656 +0000 UTC m=+152.879477838" Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.957146 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5grrn" event={"ID":"7585c230-8db6-45bd-bd39-17d27ff826dd","Type":"ContainerStarted","Data":"4d52d9d016654e3df619fc296bc196b9a69dc6ed54ee7d75e40db42c5db3d174"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.974037 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:45 crc kubenswrapper[4853]: E1209 16:58:45.976193 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.476167815 +0000 UTC m=+153.410907037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.987926 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"aa7901573a64083b26835bfa3169f4882197244e6e1ca6168addb58a2d455b77"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.987985 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"97294f5094415995c87f1349e4c1af08d9141565ac37ac1e3fa91f33124c2855"} Dec 09 16:58:45 crc kubenswrapper[4853]: I1209 16:58:45.988655 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.022806 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" event={"ID":"b5d46501-ce60-405c-8906-6983a65670fb","Type":"ContainerStarted","Data":"8a4a1a154a973a6bc8c54b3678fbd3fd42fc78955958d61f9a1689f9f5b78856"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.034475 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 16:58:46 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Dec 09 16:58:46 crc kubenswrapper[4853]: [+]process-running ok Dec 09 16:58:46 crc kubenswrapper[4853]: healthz check failed Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.034521 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.035190 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" event={"ID":"42d1afc3-f774-4b6c-8dc5-ca20727f4203","Type":"ContainerStarted","Data":"9bb377de61baff20b63ff9785d3ae86f580b274360d76b5d2af96cd96ca9fe78"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.057692 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" podStartSLOduration=130.057673286 podStartE2EDuration="2m10.057673286s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:46.012083877 +0000 UTC m=+152.946823049" watchObservedRunningTime="2025-12-09 16:58:46.057673286 +0000 UTC m=+152.992412468" Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.074265 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" event={"ID":"f8f3c87c-9080-4011-97f8-2d04ddeee5f6","Type":"ContainerStarted","Data":"298d635b022f2cec26dd6c43542d79f099c5a7f6122c9e7120b82e2b1981328f"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.074314 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" event={"ID":"f8f3c87c-9080-4011-97f8-2d04ddeee5f6","Type":"ContainerStarted","Data":"8fff39bc27921dbbf8839ad996d7f38e5f2286ad44e749f8e1c72c8ac938343b"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.076275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.076404 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8ac8c178c653b081f8584012b387fc6778db888b4d52eacb57c48bd445cea053"} Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.076678 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.57666699 +0000 UTC m=+153.511406172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.077323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" event={"ID":"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71","Type":"ContainerStarted","Data":"fdc9812e0d641e2e992479a3a5970b110475047e75dfb9592684787df6b739cb"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.085014 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xl5jc" event={"ID":"7b988e02-a4b1-4b17-a197-db4b5ad46e99","Type":"ContainerStarted","Data":"7e69a1a2db57c0990a5cb5f26ea1dc4f6581ebac431c0d206791d1da71a44022"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.109393 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" event={"ID":"dfbe3f08-01f6-419f-9f29-fcf674b02167","Type":"ContainerStarted","Data":"1c7e7ad8bbeb080841816bdba2e47aa9cf3c5edf7383dbdb29dfd64d3aa818c1"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.109440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" 
event={"ID":"dfbe3f08-01f6-419f-9f29-fcf674b02167","Type":"ContainerStarted","Data":"dec71e5682900985a96f2b1849d764b010b71340a64352722ba5220402111131"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.112169 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-svwfg" podStartSLOduration=130.112156328 podStartE2EDuration="2m10.112156328s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:46.111416425 +0000 UTC m=+153.046155617" watchObservedRunningTime="2025-12-09 16:58:46.112156328 +0000 UTC m=+153.046895510" Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.132867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" event={"ID":"9a91e971-b137-461e-b373-9b44ded89e8e","Type":"ContainerStarted","Data":"a04821457d1ffec0ed0c7e6bf80276be1891e6bcca05b6fc45bf8a008fd58106"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.132909 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" event={"ID":"9a91e971-b137-461e-b373-9b44ded89e8e","Type":"ContainerStarted","Data":"2848713a51a080bb3addae03d8931400493132e1fb56b51516506a5e5ffebbe6"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.138855 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-smchn" podStartSLOduration=130.138838847 podStartE2EDuration="2m10.138838847s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:46.137498654 +0000 UTC m=+153.072237846" watchObservedRunningTime="2025-12-09 16:58:46.138838847 +0000 UTC m=+153.073578039" Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.164725 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xp79b" event={"ID":"71b908be-495e-4eb2-8429-56c89e4344f4","Type":"ContainerStarted","Data":"b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.176970 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.177678 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4gqhk" podStartSLOduration=130.177661261 podStartE2EDuration="2m10.177661261s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:46.176125072 +0000 UTC m=+153.110864264" watchObservedRunningTime="2025-12-09 16:58:46.177661261 +0000 UTC m=+153.112400453" Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.178697 4853 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.678678173 +0000 UTC m=+153.613417405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.191887 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" event={"ID":"044a407a-76c8-49bc-8d24-040de15c1b88","Type":"ContainerStarted","Data":"b05d80d569297adeb54c00131fbc3f5fda749591ca7dae3ec70b14f88483b7a5"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.211622 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" event={"ID":"0cf0e66b-9edb-4648-b675-1ffa480ec186","Type":"ContainerStarted","Data":"65556fc1b1c247e9766389be3d86df3672e6cf0a970931f54345cf29971c258b"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.218385 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xp79b" podStartSLOduration=130.218365755 podStartE2EDuration="2m10.218365755s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:46.217315932 +0000 UTC m=+153.152055124" watchObservedRunningTime="2025-12-09 16:58:46.218365755 +0000 UTC m=+153.153104937" Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.237607 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" event={"ID":"3c3fd371-664b-41dc-8a29-6295563c0523","Type":"ContainerStarted","Data":"a917cc7ffc7981c5729dd830211959ee1c3894982fcb88c08a876e80a6c97d34"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.240454 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" event={"ID":"8deaa879-6b9f-4b7b-8cef-f48461d13c5f","Type":"ContainerStarted","Data":"5bf10796263fdf1328f90a4d93dc1ec2adcd42789f950d85afcf774d4d323ccb"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.248917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" event={"ID":"6ca85555-03f6-4585-bef7-a30cfdce8a59","Type":"ContainerStarted","Data":"e09dfef6b44e1e8c4b5ef0934b966aa75bf6b079305300fc505b5537f5babd57"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.250107 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" event={"ID":"72ec5fd6-473e-497f-b946-4bed1cd7a594","Type":"ContainerStarted","Data":"2a2afb5cae5dfdb364121f63fb9a0abe812a06d3ad03f99a604f1d26058570f0"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.252364 4853 generic.go:334] "Generic (PLEG): container finished" podID="dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74" 
containerID="1b0a309faaa1c9a5f7ece3318141dd06dde38e881280a071b00e337f80a7e65b" exitCode=0 Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.252413 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" event={"ID":"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74","Type":"ContainerDied","Data":"1b0a309faaa1c9a5f7ece3318141dd06dde38e881280a071b00e337f80a7e65b"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.256210 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" event={"ID":"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c","Type":"ContainerStarted","Data":"db84dbd33abc8f0b528f78d69b6f8a3d9a99a311841263e3ff5c10be426d7024"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.258372 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" event={"ID":"c4dc629c-36aa-41e2-8daa-a4953c31ecfa","Type":"ContainerStarted","Data":"e65dab0fab13930229af668a5bc125507afdc8c665fe89be38eff54732bb03be"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.259988 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" event={"ID":"a4118aad-5782-4909-a5df-28f0f772ef10","Type":"ContainerStarted","Data":"c7b59d99c976a0f60bf31ec99f65584e9fb31ba7713d4e6cf24d48e879929bd2"} Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.281304 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.282531 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.782517045 +0000 UTC m=+153.717256227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.386215 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.386620 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.886585314 +0000 UTC m=+153.821324496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.493648 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.494014 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:46.99399916 +0000 UTC m=+153.928738342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.597150 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.598473 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.09845178 +0000 UTC m=+154.033190962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.702952 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.703368 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.203353315 +0000 UTC m=+154.138092497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.804440 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.804902 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.304884234 +0000 UTC m=+154.239623416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:46 crc kubenswrapper[4853]: I1209 16:58:46.906672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:46 crc kubenswrapper[4853]: E1209 16:58:46.907220 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.407200137 +0000 UTC m=+154.341939319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.014105 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.014772 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.514755087 +0000 UTC m=+154.449494269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.051367 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 16:58:47 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Dec 09 16:58:47 crc kubenswrapper[4853]: [+]process-running ok Dec 09 16:58:47 crc kubenswrapper[4853]: healthz check failed Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.051415 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.119262 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.119684 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.619666292 +0000 UTC m=+154.554405474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.220329 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.220451 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.720432466 +0000 UTC m=+154.655171648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.220817 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.221148 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.721138429 +0000 UTC m=+154.655877611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.265636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" event={"ID":"a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c","Type":"ContainerStarted","Data":"59678dec3d4a019d11545a2329590bb8ac6701cfc34fac36fd22adf8eb5445b0"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.266536 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.267932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" event={"ID":"72ec5fd6-473e-497f-b946-4bed1cd7a594","Type":"ContainerStarted","Data":"d5e50ad5ebe96935bc0c17018eeb8cbcb4cf212b877a054e2ed5d355a7363aac"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.268391 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.268798 4853 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2jncz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.268828 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" podUID="a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 
10.217.0.30:8443: connect: connection refused" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.269440 4853 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-crptw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.269513 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" podUID="72ec5fd6-473e-497f-b946-4bed1cd7a594" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.270070 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4de173c909a4f873fa2030ba8b9c02fa19935ef895bca664bef12b963d68a08c"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.271526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xl5jc" event={"ID":"7b988e02-a4b1-4b17-a197-db4b5ad46e99","Type":"ContainerStarted","Data":"8dcd8b827324920d5b2846b49c79b20095a6ea3838f2df2bbbf299cca1a206a0"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.273151 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" event={"ID":"3c3fd371-664b-41dc-8a29-6295563c0523","Type":"ContainerStarted","Data":"970a0a1c4f1f0f7258e744b0489656fa78a3533de1ea4020df32c6242cc256a7"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.274126 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" event={"ID":"7e25b771-f015-46fc-ad94-e8b9aa6b49cb","Type":"ContainerStarted","Data":"4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.274680 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.276633 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cw4kq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.276663 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.277865 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" event={"ID":"0cf0e66b-9edb-4648-b675-1ffa480ec186","Type":"ContainerStarted","Data":"b47c5d013a57bf00fa051a16db9f9d9a66335a0d034006694c9912a8cca42788"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.322670 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.323955 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.823934337 +0000 UTC m=+154.758673579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.327215 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" event={"ID":"a902e816-a901-45c7-b806-bb25b1811c09","Type":"ContainerStarted","Data":"ae75719cbdb432a9a2d8a5f4e9787602d427469e522d3c2188dd249d601238ff"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.332914 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" event={"ID":"c4dc629c-36aa-41e2-8daa-a4953c31ecfa","Type":"ContainerStarted","Data":"1448b870f44c68f08d0b17e83cdf9d258f1629276d10b88f6f9ea3241bcb6881"} Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.425609 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.425996 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:47.925983782 +0000 UTC m=+154.860723044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.485263 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" podStartSLOduration=131.485243277 podStartE2EDuration="2m11.485243277s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:47.319130794 +0000 UTC m=+154.253869976" watchObservedRunningTime="2025-12-09 16:58:47.485243277 +0000 UTC m=+154.419982459" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.487032 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" podStartSLOduration=131.487023183 podStartE2EDuration="2m11.487023183s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:47.484853073 +0000 UTC m=+154.419592255" watchObservedRunningTime="2025-12-09 16:58:47.487023183 +0000 UTC m=+154.421762365" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.526963 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.527131 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.027100628 +0000 UTC m=+154.961839820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.527275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.527613 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.027578183 +0000 UTC m=+154.962317395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.551995 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7f2qt" podStartSLOduration=131.551974258 podStartE2EDuration="2m11.551974258s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:47.527076537 +0000 UTC m=+154.461815739" watchObservedRunningTime="2025-12-09 16:58:47.551974258 +0000 UTC m=+154.486713440" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.567811 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xl5jc" podStartSLOduration=10.567781591 podStartE2EDuration="10.567781591s" podCreationTimestamp="2025-12-09 16:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:47.554445516 +0000 UTC m=+154.489184698" watchObservedRunningTime="2025-12-09 16:58:47.567781591 +0000 UTC m=+154.502520763" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.602035 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-g2xml" podStartSLOduration=131.602017109 podStartE2EDuration="2m11.602017109s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:47.601881434 +0000 UTC m=+154.536620616" watchObservedRunningTime="2025-12-09 16:58:47.602017109 +0000 UTC m=+154.536756291" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.602683 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" podStartSLOduration=131.6026781 podStartE2EDuration="2m11.6026781s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:47.576846858 +0000 UTC m=+154.511586040" watchObservedRunningTime="2025-12-09 16:58:47.6026781 +0000 UTC m=+154.537417282" Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.628744 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.629126 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.129110701 +0000 UTC m=+155.063849873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.734315 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.735177 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.235162683 +0000 UTC m=+155.169901865 (durationBeforeRetry 500ms). 
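The pod_startup_latency_tracker.go records above are pure bookkeeping: podStartE2EDuration is the wall-clock span from podCreationTimestamp to the observed running time, and the zero-valued firstStartedPulling/lastFinishedPulling stamps ("0001-01-01 ...") mean no image pull was observed for these pods. The arithmetic can be checked directly from the catalog-operator record, dropping the monotonic "m=+..." suffixes, which time.Parse does not accept:

package main

import (
	"fmt"
	"time"
)

// Recomputes podStartE2EDuration for catalog-operator-68c6474976-2jncz
// from the two timestamps logged by pod_startup_latency_tracker.go.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-09 16:56:36 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-09 16:58:47.485243277 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2m11.485243277s, i.e. podStartSLOduration=131.485243277.
	fmt.Println(observed.Sub(created))
}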
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.836322 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.836762 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.336743582 +0000 UTC m=+155.271482764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:47 crc kubenswrapper[4853]: I1209 16:58:47.938176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:47 crc kubenswrapper[4853]: E1209 16:58:47.938642 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.43857988 +0000 UTC m=+155.373319062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.027774 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 16:58:48 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Dec 09 16:58:48 crc kubenswrapper[4853]: [+]process-running ok Dec 09 16:58:48 crc kubenswrapper[4853]: healthz check failed Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.027827 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.039273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.039488 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.539461257 +0000 UTC m=+155.474200439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.039756 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.040154 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.54014658 +0000 UTC m=+155.474885762 (durationBeforeRetry 500ms). 
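The router's startup-probe output above is the usual Kubernetes aggregated-healthz format: one line per named check, "[+]name ok" or "[-]name failed: reason withheld", with an overall HTTP 500 as soon as any check fails. A self-contained sketch reproducing that response shape (the check names and their failures here are illustrative, not the router's actual checks):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

type check struct {
	name string
	fn   func() error
}

// healthz aggregates named checks into one endpoint: per-check status
// lines in the body, HTTP 500 if anything failed.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			io.WriteString(w, body+"healthz check failed\n")
			return
		}
		io.WriteString(w, body+"ok\n")
	}
}

func main() {
	h := healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("no backends") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	})
	rec := httptest.NewRecorder()
	h(rec, httptest.NewRequest("GET", "/healthz", nil))
	fmt.Print(rec.Code, "\n", rec.Body.String())
}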
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.141179 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.141373 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.641334937 +0000 UTC m=+155.576074119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.141485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.141803 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.641794482 +0000 UTC m=+155.576533664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.243022 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.243192 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.743154354 +0000 UTC m=+155.677893546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.243311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.243646 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.743635909 +0000 UTC m=+155.678375091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.295147 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.295228 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.314617 4853 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ktltc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]log ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]etcd ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/generic-apiserver-start-informers ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/max-in-flight-filter ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 09 16:58:48 crc kubenswrapper[4853]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/project.openshift.io-projectcache ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/openshift.io-startinformers ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 09 16:58:48 crc kubenswrapper[4853]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 09 16:58:48 crc kubenswrapper[4853]: livez check failed Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.314670 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ktltc" podUID="b2b1f86c-b414-418d-973d-6db3442a1bd1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.337665 4853 patch_prober.go:28] interesting pod/console-operator-58897d9998-jk2pg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.337733 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" podUID="e1112df0-82ff-460a-b9c3-e3bc662862d1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.337837 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.341978 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" event={"ID":"42d1afc3-f774-4b6c-8dc5-ca20727f4203","Type":"ContainerStarted","Data":"09871c6f7bd6bcd1cd701821719484ca95884684f54ca272c6cc8953518353cb"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.343812 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.343997 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.843966739 +0000 UTC m=+155.778705931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.344297 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.344640 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.84462577 +0000 UTC m=+155.779364952 (durationBeforeRetry 500ms). 
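Three distinct probe failure modes appear in this window: "connect: connection refused" (the container process has not bound its port yet), "Client.Timeout exceeded while awaiting headers" (the listener is up but slow, as with console-operator above), and a served HTTP 500 (the endpoint is up but reports itself unhealthy). All of them count as failures to the prober. A rough sketch of that classification, with an illustrative URL and timeout:

package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"time"
)

// probe approximates what the kubelet's HTTP prober treats as failure:
// any transport error or a status outside the 2xx/3xx range.
func probe(url string) string {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		var nerr net.Error
		if errors.As(err, &nerr) && nerr.Timeout() {
			return "failure (timeout): " + err.Error()
		}
		return "failure (transport): " + err.Error() // e.g. connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure: HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	// Nothing listens on this port, so this mirrors the
	// "connect: connection refused" results above.
	fmt.Println(probe("http://127.0.0.1:1/healthz"))
}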
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.345309 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" event={"ID":"fb13fbe9-b51b-4cc5-a16d-7f54fe1e640f","Type":"ContainerStarted","Data":"ac990c248377c6ba2d0c3fca08c7d32c1bad7492167694bf260b4f3066f87585"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.383686 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" event={"ID":"9833f054-9348-44a7-80b8-2ecfa3a363ce","Type":"ContainerStarted","Data":"9c0c26270e03bf329457003abb5d6b5ed7097f002c40087e23fbaf9e7ec86961"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.389823 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" event={"ID":"c4dc629c-36aa-41e2-8daa-a4953c31ecfa","Type":"ContainerStarted","Data":"b69ba37e28fd71052bb36ed81e94266af873b3d46f21751c2f43c1d77945e646"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.402765 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" event={"ID":"6ca85555-03f6-4585-bef7-a30cfdce8a59","Type":"ContainerStarted","Data":"5f2380af2054a9d885d9ebcffc74da3fcdabe623c5e2bffed0d05423f3963e67"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.406818 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" event={"ID":"b5d46501-ce60-405c-8906-6983a65670fb","Type":"ContainerStarted","Data":"5b08f62fd72b0a89a25a4d7d043bf574c2bcb14dd0c93ebd2601a50ce27739a7"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.412350 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" event={"ID":"dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74","Type":"ContainerStarted","Data":"56c24be521e2c1adece3ebc481bd3103037495b0b6520a8316ac4f46b007cf32"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.414589 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" event={"ID":"e193fa59-1518-4852-9801-29fd9d4afcb9","Type":"ContainerStarted","Data":"38036b007a1b077ad02abe8b33bef959a9cd4950bcc14f5b05840bc3c11a3006"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.414646 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" event={"ID":"e193fa59-1518-4852-9801-29fd9d4afcb9","Type":"ContainerStarted","Data":"3fd8668f8926449943d189a9a8d681a27a410f4fd8664906dbb03e8e13db95b5"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.415232 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.427964 4853 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" event={"ID":"3c3fd371-664b-41dc-8a29-6295563c0523","Type":"ContainerStarted","Data":"d8f524e87d2a51c99f2124308d06f2c189a3765be009157327cfeecfb8c42acf"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.443233 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" event={"ID":"8deaa879-6b9f-4b7b-8cef-f48461d13c5f","Type":"ContainerStarted","Data":"dae8b91644aa2bd75ff7c75442d71c98578370ab5cc0c2266d322671ec4ff5ca"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.445488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" event={"ID":"06e2e19c-ec79-45f5-bc19-cfb6d587c6a0","Type":"ContainerStarted","Data":"a7dd5a90e6199f9300f24572b1b2b4a5caac8cbdd3a233682809c1056fb6e143"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.445523 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" event={"ID":"06e2e19c-ec79-45f5-bc19-cfb6d587c6a0","Type":"ContainerStarted","Data":"87c7377430ce0d3ac79b2b237347885f5a8026f62663cb9cf8cd974d607ce2ff"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.447012 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0edac76865831ddc022021e591c6f6c0a5498c92b3538a3b82729a0fae0c94f4"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.448251 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" event={"ID":"b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71","Type":"ContainerStarted","Data":"668eea5dbf1156a0d222f59b31f01cfab8c3476fe716798ee831d15aeff23252"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.449887 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.449958 4853 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lvvxg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.449985 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" podUID="b1d0a72d-67ac-4141-8d8c-39ac7d6e8f71" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.455145 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" event={"ID":"42c79cc6-b6ae-4e95-b078-c3341d0c6f7a","Type":"ContainerStarted","Data":"31b517f8aef595c85902437af54aa2656ebab883f61da3173e7e7d04f563d49d"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.458456 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5grrn" event={"ID":"7585c230-8db6-45bd-bd39-17d27ff826dd","Type":"ContainerStarted","Data":"d3e3bdd818dab1aa86ca277a5ed545b9f7b3e53bb9e5e7106384b3894847c8e8"} Dec 09 16:58:48 
crc kubenswrapper[4853]: I1209 16:58:48.458486 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5grrn" event={"ID":"7585c230-8db6-45bd-bd39-17d27ff826dd","Type":"ContainerStarted","Data":"3021784e26d885802b7b5a4be135928bf14698a35d5367b9b352ed8a65664b21"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.458880 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5grrn" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.464084 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.465425 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:48.9654063 +0000 UTC m=+155.900145482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.492459 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" event={"ID":"a4118aad-5782-4909-a5df-28f0f772ef10","Type":"ContainerStarted","Data":"cc6080e75e72f735b8ce1a571713c40291fa23d00c62971a2f006c4b3cfb3ca0"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.502225 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-984fm" podStartSLOduration=132.502204631 podStartE2EDuration="2m12.502204631s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.502039866 +0000 UTC m=+155.436779058" watchObservedRunningTime="2025-12-09 16:58:48.502204631 +0000 UTC m=+155.436943813" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.503842 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tmvrp" podStartSLOduration=132.503834632 podStartE2EDuration="2m12.503834632s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.46541759 +0000 UTC m=+155.400156772" watchObservedRunningTime="2025-12-09 16:58:48.503834632 +0000 UTC m=+155.438573814" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.522037 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" 
event={"ID":"044a407a-76c8-49bc-8d24-040de15c1b88","Type":"ContainerStarted","Data":"24be47473c57aec42be80dc4804351a6e99a19c346b7ae8e026e00cda2452b49"} Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.529341 4853 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2jncz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.529393 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" podUID="a510c7e2-6fd7-421c-8aa8-06aedf8b5a5c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.529463 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cw4kq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.529518 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.529621 4853 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-crptw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.529635 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" podUID="72ec5fd6-473e-497f-b946-4bed1cd7a594" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.555977 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jk2pg" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.567762 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.577958 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.077941439 +0000 UTC m=+156.012680621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.580563 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vwzjm" podStartSLOduration=133.580547552 podStartE2EDuration="2m13.580547552s" podCreationTimestamp="2025-12-09 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.556539758 +0000 UTC m=+155.491278940" watchObservedRunningTime="2025-12-09 16:58:48.580547552 +0000 UTC m=+155.515286734" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.620032 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hkdtl" podStartSLOduration=132.620013016 podStartE2EDuration="2m12.620013016s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.591701266 +0000 UTC m=+155.526440448" watchObservedRunningTime="2025-12-09 16:58:48.620013016 +0000 UTC m=+155.554752198" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.620741 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6cmx8" podStartSLOduration=132.620732529 podStartE2EDuration="2m12.620732529s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.618991703 +0000 UTC m=+155.553730885" watchObservedRunningTime="2025-12-09 16:58:48.620732529 +0000 UTC m=+155.555471711" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.668269 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gv85n" podStartSLOduration=132.66825073 podStartE2EDuration="2m12.66825073s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.666826064 +0000 UTC m=+155.601565246" watchObservedRunningTime="2025-12-09 16:58:48.66825073 +0000 UTC m=+155.602989912" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.669654 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.671119 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.17109814 +0000 UTC m=+156.105837322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.684009 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.734775 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qm8cq" podStartSLOduration=132.734759925 podStartE2EDuration="2m12.734759925s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.733254807 +0000 UTC m=+155.667993989" watchObservedRunningTime="2025-12-09 16:58:48.734759925 +0000 UTC m=+155.669499107" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.807157 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.808371 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.807589 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" podStartSLOduration=132.80756667 podStartE2EDuration="2m12.80756667s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.804364837 +0000 UTC m=+155.739104019" watchObservedRunningTime="2025-12-09 16:58:48.80756667 +0000 UTC m=+155.742305852" Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.809870 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.309854502 +0000 UTC m=+156.244593684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.837534 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8qdc9" podStartSLOduration=132.837517981 podStartE2EDuration="2m12.837517981s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.836411577 +0000 UTC m=+155.771150759" watchObservedRunningTime="2025-12-09 16:58:48.837517981 +0000 UTC m=+155.772257163" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.873563 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" podStartSLOduration=132.873546687 podStartE2EDuration="2m12.873546687s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.870268533 +0000 UTC m=+155.805007715" watchObservedRunningTime="2025-12-09 16:58:48.873546687 +0000 UTC m=+155.808285869" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.918680 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:48 crc kubenswrapper[4853]: E1209 16:58:48.919262 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.419236469 +0000 UTC m=+156.353975661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.952615 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" podStartSLOduration=132.95257433 podStartE2EDuration="2m12.95257433s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.917659 +0000 UTC m=+155.852398202" watchObservedRunningTime="2025-12-09 16:58:48.95257433 +0000 UTC m=+155.887313512" Dec 09 16:58:48 crc kubenswrapper[4853]: I1209 16:58:48.954212 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kzd9d" podStartSLOduration=132.954205442 podStartE2EDuration="2m12.954205442s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:48.947777878 +0000 UTC m=+155.882517060" watchObservedRunningTime="2025-12-09 16:58:48.954205442 +0000 UTC m=+155.888944624" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.019723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.020456 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.520439258 +0000 UTC m=+156.455178440 (durationBeforeRetry 500ms). 
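Stepping back, the alternating UnmountVolume/MountVolume pairs come from the volume reconciler behind reconciler_common.go comparing its desired state of the world (the new image-registry pod should have the PVC) against the actual state (the terminated pod still holds it); both corrective operations are issued on every sync, and both keep failing while the driver is unregistered. A heavily reduced sketch of that compare-and-act loop, with illustrative types:

package main

import "fmt"

// key identifies a (volume, pod) attachment in either world view.
type key struct{ volume, pod string }

// reconcile issues unmounts for attachments that exist but are no longer
// desired, and mounts for attachments that are desired but absent.
func reconcile(desired, actual map[key]bool) {
	for k := range actual {
		if !desired[k] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", k.volume, k.pod)
		}
	}
	for k := range desired {
		if !actual[k] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", k.volume, k.pod)
		}
	}
}

func main() {
	pvc := "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	// The old registry pod still holds the volume (actual), while the new
	// pod wants it (desired), so both operations fire on every pass.
	actual := map[key]bool{{pvc, "8f668bae-612b-4b75-9490-919e737c6a3b"}: true}
	desired := map[key]bool{{pvc, "image-registry-697d97f7c8-57bs5"}: true}
	reconcile(desired, actual)
}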
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.021322 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5grrn" podStartSLOduration=12.021305445 podStartE2EDuration="12.021305445s" podCreationTimestamp="2025-12-09 16:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:49.018882058 +0000 UTC m=+155.953621250" watchObservedRunningTime="2025-12-09 16:58:49.021305445 +0000 UTC m=+155.956044627" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.039782 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 16:58:49 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Dec 09 16:58:49 crc kubenswrapper[4853]: [+]process-running ok Dec 09 16:58:49 crc kubenswrapper[4853]: healthz check failed Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.039834 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.058007 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" podStartSLOduration=133.057991372 podStartE2EDuration="2m13.057991372s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:49.057087783 +0000 UTC m=+155.991826965" watchObservedRunningTime="2025-12-09 16:58:49.057991372 +0000 UTC m=+155.992730554" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.124282 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.124636 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.6246201 +0000 UTC m=+156.559359282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.129458 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4bfgv" podStartSLOduration=133.129436053 podStartE2EDuration="2m13.129436053s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:49.092720305 +0000 UTC m=+156.027459487" watchObservedRunningTime="2025-12-09 16:58:49.129436053 +0000 UTC m=+156.064175245" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.233428 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.233787 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.733773211 +0000 UTC m=+156.668512393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.334755 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.335007 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.834987599 +0000 UTC m=+156.769726781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.335043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.335387 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.835378871 +0000 UTC m=+156.770118043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.352369 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-ccm5l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.352429 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-ccm5l container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.352458 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ccm5l" podUID="5b33f6ba-88ff-4fd0-876d-871cf36db1cf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.352482 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ccm5l" podUID="5b33f6ba-88ff-4fd0-876d-871cf36db1cf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.418337 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.418387 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 
16:58:49.423200 4853 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-4dwl5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.423252 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5" podUID="dcbe2e3a-9876-4cd2-ba0a-5a3d4fce7e74" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.435919 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.436086 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.936065403 +0000 UTC m=+156.870804585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.436172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.436476 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:49.936469125 +0000 UTC m=+156.871208307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.447090 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.447133 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.448728 4853 patch_prober.go:28] interesting pod/console-f9d7485db-xp79b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.448797 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xp79b" podUID="71b908be-495e-4eb2-8429-56c89e4344f4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.525988 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" event={"ID":"b5d46501-ce60-405c-8906-6983a65670fb","Type":"ContainerStarted","Data":"7eecc5aa8ba167ec3b8d89c514e561f3be9867f2145bc4d7d3168230b7ac2a15"} Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.535794 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cw4kq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.535866 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.536821 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.537149 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.037133496 +0000 UTC m=+156.971872678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.558668 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2jncz" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.599406 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-crptw" Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.638736 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.639584 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.139566713 +0000 UTC m=+157.074305895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.748013 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.748435 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.248414583 +0000 UTC m=+157.183153765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.849645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.850029 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.350014285 +0000 UTC m=+157.284753467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.950927 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.951084 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.451053717 +0000 UTC m=+157.385792899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:49 crc kubenswrapper[4853]: I1209 16:58:49.951221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:49 crc kubenswrapper[4853]: E1209 16:58:49.951538 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.451525882 +0000 UTC m=+157.386265064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.023945 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ffzns" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.027294 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 16:58:50 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Dec 09 16:58:50 crc kubenswrapper[4853]: [+]process-running ok Dec 09 16:58:50 crc kubenswrapper[4853]: healthz check failed Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.027373 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.052854 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:50 crc kubenswrapper[4853]: E1209 16:58:50.053034 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.553005838 +0000 UTC m=+157.487745080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.154839 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:50 crc kubenswrapper[4853]: E1209 16:58:50.155322 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 16:58:50.655301021 +0000 UTC m=+157.590040203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-57bs5" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.166556 4853 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.232614 4853 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T16:58:50.166609631Z","Handler":null,"Name":""} Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.247669 4853 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.247718 4853 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.256384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.268718 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.303121 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.358382 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.366381 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.366420 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.440034 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-57bs5\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.445453 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lvvxg" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.486637 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t2fjn"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.487826 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.490410 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.534544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" event={"ID":"b5d46501-ce60-405c-8906-6983a65670fb","Type":"ContainerStarted","Data":"358a365dcac18d55a4bb70defbd4994e98cb01211ba62a95fa0149d554f067a0"} Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.534658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" event={"ID":"b5d46501-ce60-405c-8906-6983a65670fb","Type":"ContainerStarted","Data":"0a7cb283221fad1a6278bc0339ab2c4bec0e43f616db648b87c2436dbde39031"} Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.562163 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-utilities\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.562234 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-catalog-content\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.562298 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p74b\" (UniqueName: \"kubernetes.io/projected/e2b524f3-9a6d-4d43-a023-3b8deee90128-kube-api-access-6p74b\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.570929 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2fjn"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.579033 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w78wr" podStartSLOduration=13.579014313 podStartE2EDuration="13.579014313s" podCreationTimestamp="2025-12-09 16:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:50.578803616 +0000 UTC m=+157.513542798" watchObservedRunningTime="2025-12-09 16:58:50.579014313 +0000 UTC m=+157.513753495" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.670420 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-utilities\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.670489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-catalog-content\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.670523 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p74b\" (UniqueName: \"kubernetes.io/projected/e2b524f3-9a6d-4d43-a023-3b8deee90128-kube-api-access-6p74b\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.671151 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-catalog-content\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.671270 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-utilities\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.677274 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.685503 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vtlzd"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.686444 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.688468 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.705672 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p74b\" (UniqueName: \"kubernetes.io/projected/e2b524f3-9a6d-4d43-a023-3b8deee90128-kube-api-access-6p74b\") pod \"community-operators-t2fjn\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.709411 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtlzd"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.771937 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5h7h\" (UniqueName: \"kubernetes.io/projected/ed152681-91c0-40d0-be74-21f8e751080d-kube-api-access-d5h7h\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.772046 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-catalog-content\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.772111 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-utilities\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.802672 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.871324 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8wf89"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.872955 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5h7h\" (UniqueName: \"kubernetes.io/projected/ed152681-91c0-40d0-be74-21f8e751080d-kube-api-access-d5h7h\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.873043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-catalog-content\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.873086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-utilities\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.873853 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-catalog-content\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.874222 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.875092 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-utilities\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.883742 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8wf89"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.908707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5h7h\" (UniqueName: \"kubernetes.io/projected/ed152681-91c0-40d0-be74-21f8e751080d-kube-api-access-d5h7h\") pod \"certified-operators-vtlzd\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.940355 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.944481 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.950246 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.950692 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.953744 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.984070 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrznq\" (UniqueName: \"kubernetes.io/projected/c4e52886-10d9-49ec-8160-091f821e2cda-kube-api-access-wrznq\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.984143 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-catalog-content\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.984178 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-utilities\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.984265 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38d612a1-d15b-4a4a-803d-78e655a8782c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 16:58:50 crc kubenswrapper[4853]: I1209 16:58:50.984295 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38d612a1-d15b-4a4a-803d-78e655a8782c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.032424 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 16:58:51 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Dec 09 16:58:51 crc kubenswrapper[4853]: [+]process-running ok Dec 09 16:58:51 crc kubenswrapper[4853]: healthz check failed Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.032480 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.034485 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57bs5"] Dec 09 16:58:51 crc kubenswrapper[4853]: W1209 16:58:51.048441 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ab11c3_427d_46dd_83e4_038afc30574a.slice/crio-31b390e8e7fad87bf5b0264658bc53b8351d8117c4f3ab5b1993bc2dbf558f5c WatchSource:0}: Error finding container 31b390e8e7fad87bf5b0264658bc53b8351d8117c4f3ab5b1993bc2dbf558f5c: Status 404 returned error can't find the container with id 31b390e8e7fad87bf5b0264658bc53b8351d8117c4f3ab5b1993bc2dbf558f5c Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.054888 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.078991 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4p7jc"] Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.081808 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.084853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38d612a1-d15b-4a4a-803d-78e655a8782c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.084885 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38d612a1-d15b-4a4a-803d-78e655a8782c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.084909 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrznq\" (UniqueName: \"kubernetes.io/projected/c4e52886-10d9-49ec-8160-091f821e2cda-kube-api-access-wrznq\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.084939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-catalog-content\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.084965 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-utilities\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.085403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38d612a1-d15b-4a4a-803d-78e655a8782c-kubelet-dir\") pod \"revision-pruner-9-crc\" 
(UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.085429 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-utilities\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.085872 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-catalog-content\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.094228 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4p7jc"] Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.120417 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38d612a1-d15b-4a4a-803d-78e655a8782c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.125296 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrznq\" (UniqueName: \"kubernetes.io/projected/c4e52886-10d9-49ec-8160-091f821e2cda-kube-api-access-wrznq\") pod \"community-operators-8wf89\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " pod="openshift-marketplace/community-operators-8wf89" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.188456 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-catalog-content\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.188554 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-utilities\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.188572 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khwd6\" (UniqueName: \"kubernetes.io/projected/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-kube-api-access-khwd6\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.260159 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2fjn"] Dec 09 16:58:51 crc kubenswrapper[4853]: W1209 16:58:51.270348 4853 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2b524f3_9a6d_4d43_a023_3b8deee90128.slice/crio-ebb7fb890bd9078f12f0986b5743de2599f7cf1100b2287b194d5a39a48c6d84 WatchSource:0}: Error finding container ebb7fb890bd9078f12f0986b5743de2599f7cf1100b2287b194d5a39a48c6d84: Status 404 returned error can't find the container with id ebb7fb890bd9078f12f0986b5743de2599f7cf1100b2287b194d5a39a48c6d84
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.287187 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wf89"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.289432 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-catalog-content\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.289515 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-utilities\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.289535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khwd6\" (UniqueName: \"kubernetes.io/projected/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-kube-api-access-khwd6\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.290131 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-catalog-content\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.291113 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-utilities\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.299407 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.309167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khwd6\" (UniqueName: \"kubernetes.io/projected/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-kube-api-access-khwd6\") pod \"certified-operators-4p7jc\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " pod="openshift-marketplace/certified-operators-4p7jc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.310983 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vtlzd"]
Dec 09 16:58:51 crc kubenswrapper[4853]: W1209 16:58:51.329687 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded152681_91c0_40d0_be74_21f8e751080d.slice/crio-f5da855fd30cc91c38ae71557b78a675e82cb9a5a114d6b830f58c51e4e92477 WatchSource:0}: Error finding container f5da855fd30cc91c38ae71557b78a675e82cb9a5a114d6b830f58c51e4e92477: Status 404 returned error can't find the container with id f5da855fd30cc91c38ae71557b78a675e82cb9a5a114d6b830f58c51e4e92477
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.414315 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p7jc"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.550821 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8wf89"]
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.560214 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" event={"ID":"d6ab11c3-427d-46dd-83e4-038afc30574a","Type":"ContainerStarted","Data":"73213c5761a8f674271e53f0d4d16c38ce03c1c97df374a5ebb39866348dd9bc"}
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.560276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" event={"ID":"d6ab11c3-427d-46dd-83e4-038afc30574a","Type":"ContainerStarted","Data":"31b390e8e7fad87bf5b0264658bc53b8351d8117c4f3ab5b1993bc2dbf558f5c"}
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.560779 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.561033 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.568827 4853 generic.go:334] "Generic (PLEG): container finished" podID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerID="033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86" exitCode=0
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.569987 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2fjn" event={"ID":"e2b524f3-9a6d-4d43-a023-3b8deee90128","Type":"ContainerDied","Data":"033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86"}
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.570151 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2fjn" event={"ID":"e2b524f3-9a6d-4d43-a023-3b8deee90128","Type":"ContainerStarted","Data":"ebb7fb890bd9078f12f0986b5743de2599f7cf1100b2287b194d5a39a48c6d84"}
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.581140 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.628072 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.629103 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtlzd" event={"ID":"ed152681-91c0-40d0-be74-21f8e751080d","Type":"ContainerStarted","Data":"4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8"}
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.629418 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtlzd" event={"ID":"ed152681-91c0-40d0-be74-21f8e751080d","Type":"ContainerStarted","Data":"f5da855fd30cc91c38ae71557b78a675e82cb9a5a114d6b830f58c51e4e92477"}
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.685490 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" podStartSLOduration=135.685470563 podStartE2EDuration="2m15.685470563s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:51.591092672 +0000 UTC m=+158.525831854" watchObservedRunningTime="2025-12-09 16:58:51.685470563 +0000 UTC m=+158.620209755"
Dec 09 16:58:51 crc kubenswrapper[4853]: I1209 16:58:51.846892 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4p7jc"]
Dec 09 16:58:51 crc kubenswrapper[4853]: W1209 16:58:51.990930 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8795b7f_7c27_4e0b_9321_e03a8e520b2a.slice/crio-a50e7e54f63b7d8fb9308e83821e97e778a868c549d802fb554f19416c5acce5 WatchSource:0}: Error finding container a50e7e54f63b7d8fb9308e83821e97e778a868c549d802fb554f19416c5acce5: Status 404 returned error can't find the container with id a50e7e54f63b7d8fb9308e83821e97e778a868c549d802fb554f19416c5acce5
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.030128 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:52 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:52 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:52 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.030218 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.617806 4853 generic.go:334] "Generic (PLEG): container finished" podID="c4e52886-10d9-49ec-8160-091f821e2cda" containerID="5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b" exitCode=0
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.618051 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wf89" event={"ID":"c4e52886-10d9-49ec-8160-091f821e2cda","Type":"ContainerDied","Data":"5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.618245 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wf89" event={"ID":"c4e52886-10d9-49ec-8160-091f821e2cda","Type":"ContainerStarted","Data":"5ed98101be27e0b055a714fc180367a479b86c7b85f1f08cb5a39564d33abe47"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.621529 4853 generic.go:334] "Generic (PLEG): container finished" podID="ed152681-91c0-40d0-be74-21f8e751080d" containerID="4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8" exitCode=0
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.621627 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtlzd" event={"ID":"ed152681-91c0-40d0-be74-21f8e751080d","Type":"ContainerDied","Data":"4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.625648 4853 generic.go:334] "Generic (PLEG): container finished" podID="a4118aad-5782-4909-a5df-28f0f772ef10" containerID="cc6080e75e72f735b8ce1a571713c40291fa23d00c62971a2f006c4b3cfb3ca0" exitCode=0
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.625724 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" event={"ID":"a4118aad-5782-4909-a5df-28f0f772ef10","Type":"ContainerDied","Data":"cc6080e75e72f735b8ce1a571713c40291fa23d00c62971a2f006c4b3cfb3ca0"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.627774 4853 generic.go:334] "Generic (PLEG): container finished" podID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerID="63ef67fd1feb6c5483e999921adb7a6946872dca6042ae1b08ef601d464789f8" exitCode=0
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.627840 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7jc" event={"ID":"d8795b7f-7c27-4e0b-9321-e03a8e520b2a","Type":"ContainerDied","Data":"63ef67fd1feb6c5483e999921adb7a6946872dca6042ae1b08ef601d464789f8"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.627870 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7jc" event={"ID":"d8795b7f-7c27-4e0b-9321-e03a8e520b2a","Type":"ContainerStarted","Data":"a50e7e54f63b7d8fb9308e83821e97e778a868c549d802fb554f19416c5acce5"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.636832 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38d612a1-d15b-4a4a-803d-78e655a8782c","Type":"ContainerStarted","Data":"8ee75efb0ab0212622425d8da9af4e4a391a529edd36097138fb510b9c6bff8b"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.636866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38d612a1-d15b-4a4a-803d-78e655a8782c","Type":"ContainerStarted","Data":"85f31dfca56130745306e3513d07ea02fede12f50f87051db3dbb4a976d0a577"}
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.681310 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.681286345 podStartE2EDuration="2.681286345s" podCreationTimestamp="2025-12-09 16:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:58:52.669716137 +0000 UTC m=+159.604455329" watchObservedRunningTime="2025-12-09 16:58:52.681286345 +0000 UTC m=+159.616025527"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.687747 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lg69z"]
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.689152 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.690887 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.696750 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg69z"]
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.818426 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-catalog-content\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.818484 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-utilities\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.818529 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtg2z\" (UniqueName: \"kubernetes.io/projected/5baa3927-1796-48fe-9238-27f8717fbe89-kube-api-access-vtg2z\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.920094 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-utilities\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.920155 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtg2z\" (UniqueName: \"kubernetes.io/projected/5baa3927-1796-48fe-9238-27f8717fbe89-kube-api-access-vtg2z\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.920275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-catalog-content\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.920683 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-utilities\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.920962 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-catalog-content\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:52 crc kubenswrapper[4853]: I1209 16:58:52.942672 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtg2z\" (UniqueName: \"kubernetes.io/projected/5baa3927-1796-48fe-9238-27f8717fbe89-kube-api-access-vtg2z\") pod \"redhat-marketplace-lg69z\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.026130 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:53 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:53 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:53 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.026188 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.030955 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg69z"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.073888 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rsmkd"]
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.077977 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.081931 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsmkd"]
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.225952 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-utilities\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.226352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mvgd\" (UniqueName: \"kubernetes.io/projected/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-kube-api-access-6mvgd\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.226427 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-catalog-content\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.232806 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg69z"]
Dec 09 16:58:53 crc kubenswrapper[4853]: W1209 16:58:53.242020 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5baa3927_1796_48fe_9238_27f8717fbe89.slice/crio-93b6ffa6f9a94682d3d7e1713c38daffb4d919b92d5be31ba03775ac830f7eb9 WatchSource:0}: Error finding container 93b6ffa6f9a94682d3d7e1713c38daffb4d919b92d5be31ba03775ac830f7eb9: Status 404 returned error can't find the container with id 93b6ffa6f9a94682d3d7e1713c38daffb4d919b92d5be31ba03775ac830f7eb9
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.301353 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.306681 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ktltc"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.327797 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-catalog-content\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.327847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-utilities\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.327902 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mvgd\" (UniqueName: \"kubernetes.io/projected/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-kube-api-access-6mvgd\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.328404 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-catalog-content\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.328502 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-utilities\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.358924 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mvgd\" (UniqueName: \"kubernetes.io/projected/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-kube-api-access-6mvgd\") pod \"redhat-marketplace-rsmkd\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.402370 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsmkd"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.639841 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsmkd"]
Dec 09 16:58:53 crc kubenswrapper[4853]: W1209 16:58:53.656815 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod058c275f_2ca6_45c0_9dd1_bcd8861c2fb5.slice/crio-291a284d2cb0eff0de851db6dc9d002327912238baa45c44c2bd14ece3e1b314 WatchSource:0}: Error finding container 291a284d2cb0eff0de851db6dc9d002327912238baa45c44c2bd14ece3e1b314: Status 404 returned error can't find the container with id 291a284d2cb0eff0de851db6dc9d002327912238baa45c44c2bd14ece3e1b314
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.659192 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg69z" event={"ID":"5baa3927-1796-48fe-9238-27f8717fbe89","Type":"ContainerStarted","Data":"93b6ffa6f9a94682d3d7e1713c38daffb4d919b92d5be31ba03775ac830f7eb9"}
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.684609 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rszvg"]
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.685950 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.687491 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.695504 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rszvg"]
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.835203 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-utilities\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.835302 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-catalog-content\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.835355 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp428\" (UniqueName: \"kubernetes.io/projected/cf894738-9ac9-49cf-a5be-c4414628c89c-kube-api-access-xp428\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.936853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-utilities\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.936909 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-catalog-content\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.936944 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp428\" (UniqueName: \"kubernetes.io/projected/cf894738-9ac9-49cf-a5be-c4414628c89c-kube-api-access-xp428\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.937672 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-utilities\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.937727 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-catalog-content\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.949322 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz"
Dec 09 16:58:53 crc kubenswrapper[4853]: I1209 16:58:53.958887 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp428\" (UniqueName: \"kubernetes.io/projected/cf894738-9ac9-49cf-a5be-c4414628c89c-kube-api-access-xp428\") pod \"redhat-operators-rszvg\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.027497 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:54 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:54 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:54 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.027657 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.038086 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4118aad-5782-4909-a5df-28f0f772ef10-config-volume\") pod \"a4118aad-5782-4909-a5df-28f0f772ef10\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") "
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.038181 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8txg6\" (UniqueName: \"kubernetes.io/projected/a4118aad-5782-4909-a5df-28f0f772ef10-kube-api-access-8txg6\") pod \"a4118aad-5782-4909-a5df-28f0f772ef10\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") "
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.038244 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4118aad-5782-4909-a5df-28f0f772ef10-secret-volume\") pod \"a4118aad-5782-4909-a5df-28f0f772ef10\" (UID: \"a4118aad-5782-4909-a5df-28f0f772ef10\") "
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.042150 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4118aad-5782-4909-a5df-28f0f772ef10-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4118aad-5782-4909-a5df-28f0f772ef10" (UID: "a4118aad-5782-4909-a5df-28f0f772ef10"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.042861 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4118aad-5782-4909-a5df-28f0f772ef10-kube-api-access-8txg6" (OuterVolumeSpecName: "kube-api-access-8txg6") pod "a4118aad-5782-4909-a5df-28f0f772ef10" (UID: "a4118aad-5782-4909-a5df-28f0f772ef10"). InnerVolumeSpecName "kube-api-access-8txg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.053152 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4118aad-5782-4909-a5df-28f0f772ef10-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4118aad-5782-4909-a5df-28f0f772ef10" (UID: "a4118aad-5782-4909-a5df-28f0f772ef10"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.068623 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6w4cc"]
Dec 09 16:58:54 crc kubenswrapper[4853]: E1209 16:58:54.069396 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4118aad-5782-4909-a5df-28f0f772ef10" containerName="collect-profiles"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.069495 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4118aad-5782-4909-a5df-28f0f772ef10" containerName="collect-profiles"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.069691 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4118aad-5782-4909-a5df-28f0f772ef10" containerName="collect-profiles"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.070467 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.079288 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6w4cc"]
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.112197 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rszvg"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.140356 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8txg6\" (UniqueName: \"kubernetes.io/projected/a4118aad-5782-4909-a5df-28f0f772ef10-kube-api-access-8txg6\") on node \"crc\" DevicePath \"\""
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.140381 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4118aad-5782-4909-a5df-28f0f772ef10-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.140391 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4118aad-5782-4909-a5df-28f0f772ef10-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.252247 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn89c\" (UniqueName: \"kubernetes.io/projected/539542f2-16c1-4479-814c-f39a282d6726-kube-api-access-pn89c\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.252297 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-catalog-content\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.252332 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-utilities\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.341624 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rszvg"]
Dec 09 16:58:54 crc kubenswrapper[4853]: W1209 16:58:54.349266 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf894738_9ac9_49cf_a5be_c4414628c89c.slice/crio-a753641ad8b2db91c049871b8affc466599d258893d6eef42069ef6b88432e86 WatchSource:0}: Error finding container a753641ad8b2db91c049871b8affc466599d258893d6eef42069ef6b88432e86: Status 404 returned error can't find the container with id a753641ad8b2db91c049871b8affc466599d258893d6eef42069ef6b88432e86
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.352897 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn89c\" (UniqueName: \"kubernetes.io/projected/539542f2-16c1-4479-814c-f39a282d6726-kube-api-access-pn89c\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.352937 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-catalog-content\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.352966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-utilities\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.353436 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-utilities\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.353500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-catalog-content\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.378379 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn89c\" (UniqueName: \"kubernetes.io/projected/539542f2-16c1-4479-814c-f39a282d6726-kube-api-access-pn89c\") pod \"redhat-operators-6w4cc\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.431427 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.438129 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4dwl5"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.448847 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w4cc"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.716213 4853 generic.go:334] "Generic (PLEG): container finished" podID="5baa3927-1796-48fe-9238-27f8717fbe89" containerID="0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e" exitCode=0
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.716301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg69z" event={"ID":"5baa3927-1796-48fe-9238-27f8717fbe89","Type":"ContainerDied","Data":"0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e"}
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.738236 4853 generic.go:334] "Generic (PLEG): container finished" podID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerID="19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a" exitCode=0
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.738354 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsmkd" event={"ID":"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5","Type":"ContainerDied","Data":"19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a"}
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.738382 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsmkd" event={"ID":"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5","Type":"ContainerStarted","Data":"291a284d2cb0eff0de851db6dc9d002327912238baa45c44c2bd14ece3e1b314"}
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.749992 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz" event={"ID":"a4118aad-5782-4909-a5df-28f0f772ef10","Type":"ContainerDied","Data":"c7b59d99c976a0f60bf31ec99f65584e9fb31ba7713d4e6cf24d48e879929bd2"}
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.750029 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b59d99c976a0f60bf31ec99f65584e9fb31ba7713d4e6cf24d48e879929bd2"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.750090 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz"
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.774073 4853 generic.go:334] "Generic (PLEG): container finished" podID="38d612a1-d15b-4a4a-803d-78e655a8782c" containerID="8ee75efb0ab0212622425d8da9af4e4a391a529edd36097138fb510b9c6bff8b" exitCode=0
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.774208 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38d612a1-d15b-4a4a-803d-78e655a8782c","Type":"ContainerDied","Data":"8ee75efb0ab0212622425d8da9af4e4a391a529edd36097138fb510b9c6bff8b"}
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.785500 4853 generic.go:334] "Generic (PLEG): container finished" podID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerID="d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7" exitCode=0
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.786179 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rszvg" event={"ID":"cf894738-9ac9-49cf-a5be-c4414628c89c","Type":"ContainerDied","Data":"d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7"}
Dec 09 16:58:54 crc kubenswrapper[4853]: I1209 16:58:54.786206 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rszvg" event={"ID":"cf894738-9ac9-49cf-a5be-c4414628c89c","Type":"ContainerStarted","Data":"a753641ad8b2db91c049871b8affc466599d258893d6eef42069ef6b88432e86"}
Dec 09 16:58:54 crc kubenswrapper[4853]: E1209 16:58:54.924572 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4118aad_5782_4909_a5df_28f0f772ef10.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4118aad_5782_4909_a5df_28f0f772ef10.slice/crio-c7b59d99c976a0f60bf31ec99f65584e9fb31ba7713d4e6cf24d48e879929bd2\": RecentStats: unable to find data in memory cache]"
Dec 09 16:58:55 crc kubenswrapper[4853]: I1209 16:58:55.027902 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:55 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:55 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:55 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:55 crc kubenswrapper[4853]: I1209 16:58:55.027989 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:55 crc kubenswrapper[4853]: I1209 16:58:55.098966 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6w4cc"]
Dec 09 16:58:55 crc kubenswrapper[4853]: I1209 16:58:55.802444 4853 generic.go:334] "Generic (PLEG): container finished" podID="539542f2-16c1-4479-814c-f39a282d6726" containerID="d4027e8b85f948e81f5f6fed0fa960f0e1f602bb8d3e2b823f8d6cbc6b73c005" exitCode=0
Dec 09 16:58:55 crc kubenswrapper[4853]: I1209 16:58:55.802565 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w4cc" event={"ID":"539542f2-16c1-4479-814c-f39a282d6726","Type":"ContainerDied","Data":"d4027e8b85f948e81f5f6fed0fa960f0e1f602bb8d3e2b823f8d6cbc6b73c005"}
Dec 09 16:58:55 crc kubenswrapper[4853]: I1209 16:58:55.802953 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w4cc" event={"ID":"539542f2-16c1-4479-814c-f39a282d6726","Type":"ContainerStarted","Data":"10fd1029d6c9769b3f1ad4771202c0e757a9c6ded3e89b4ea8a1459e6fa3b3b1"}
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.025898 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:56 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:56 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:56 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.025979 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.079411 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.115886 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38d612a1-d15b-4a4a-803d-78e655a8782c-kubelet-dir\") pod \"38d612a1-d15b-4a4a-803d-78e655a8782c\" (UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") "
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.115988 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38d612a1-d15b-4a4a-803d-78e655a8782c-kube-api-access\") pod \"38d612a1-d15b-4a4a-803d-78e655a8782c\" (UID: \"38d612a1-d15b-4a4a-803d-78e655a8782c\") "
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.117743 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d612a1-d15b-4a4a-803d-78e655a8782c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "38d612a1-d15b-4a4a-803d-78e655a8782c" (UID: "38d612a1-d15b-4a4a-803d-78e655a8782c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.123751 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d612a1-d15b-4a4a-803d-78e655a8782c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "38d612a1-d15b-4a4a-803d-78e655a8782c" (UID: "38d612a1-d15b-4a4a-803d-78e655a8782c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.217098 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38d612a1-d15b-4a4a-803d-78e655a8782c-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.217143 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38d612a1-d15b-4a4a-803d-78e655a8782c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.249761 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 09 16:58:56 crc kubenswrapper[4853]: E1209 16:58:56.249990 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d612a1-d15b-4a4a-803d-78e655a8782c" containerName="pruner"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.250005 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d612a1-d15b-4a4a-803d-78e655a8782c" containerName="pruner"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.250157 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d612a1-d15b-4a4a-803d-78e655a8782c" containerName="pruner"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.250651 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.253299 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.253808 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.258615 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.318412 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfb806ca-de1f-4443-8980-e3171fa60c59-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.318470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cfb806ca-de1f-4443-8980-e3171fa60c59-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.419537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cfb806ca-de1f-4443-8980-e3171fa60c59-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.419667 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cfb806ca-de1f-4443-8980-e3171fa60c59-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.419963 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfb806ca-de1f-4443-8980-e3171fa60c59-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.443385 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfb806ca-de1f-4443-8980-e3171fa60c59-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.584664 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.823916 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38d612a1-d15b-4a4a-803d-78e655a8782c","Type":"ContainerDied","Data":"85f31dfca56130745306e3513d07ea02fede12f50f87051db3dbb4a976d0a577"}
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.823991 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f31dfca56130745306e3513d07ea02fede12f50f87051db3dbb4a976d0a577"
Dec 09 16:58:56 crc kubenswrapper[4853]: I1209 16:58:56.824051 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 16:58:57 crc kubenswrapper[4853]: I1209 16:58:57.027502 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:57 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:57 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:57 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:57 crc kubenswrapper[4853]: I1209 16:58:57.027556 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:57 crc kubenswrapper[4853]: I1209 16:58:57.222799 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 09 16:58:57 crc kubenswrapper[4853]: I1209 16:58:57.830522 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cfb806ca-de1f-4443-8980-e3171fa60c59","Type":"ContainerStarted","Data":"819cf6e488cd41a5cc0b729975b8b80e549f7e248c5be19e73ed7659555d5772"}
Dec 09 16:58:57 crc kubenswrapper[4853]: I1209 16:58:57.868196 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5grrn"
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.026947 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:58 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:58 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:58 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.027002 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.366475 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.373220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d55def8-578d-461b-9514-07eea9c62336-metrics-certs\") pod \"network-metrics-daemon-77995\" (UID: \"7d55def8-578d-461b-9514-07eea9c62336\") " pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.488720 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-77995"
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.593722 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.594038 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.860999 4853 generic.go:334] "Generic (PLEG): container finished" podID="cfb806ca-de1f-4443-8980-e3171fa60c59" containerID="a93e0ddd8b6588910e6314c17d2d7da5bef1902d774b6ed17451dbf5baca648b" exitCode=0
Dec 09 16:58:58 crc kubenswrapper[4853]: I1209 16:58:58.861045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cfb806ca-de1f-4443-8980-e3171fa60c59","Type":"ContainerDied","Data":"a93e0ddd8b6588910e6314c17d2d7da5bef1902d774b6ed17451dbf5baca648b"}
Dec 09 16:58:59 crc kubenswrapper[4853]: I1209 16:58:59.015674 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-77995"]
Dec 09 16:58:59 crc kubenswrapper[4853]: I1209 16:58:59.027847 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:58:59 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:58:59 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:58:59 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:58:59 crc kubenswrapper[4853]: I1209 16:58:59.027903 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:58:59 crc kubenswrapper[4853]: W1209 16:58:59.046057 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d55def8_578d_461b_9514_07eea9c62336.slice/crio-847ee82691d1178ac5a3fd5de9e9ba7a4deb6034dda5d47d816ac154a7d3361b WatchSource:0}: Error finding container 847ee82691d1178ac5a3fd5de9e9ba7a4deb6034dda5d47d816ac154a7d3361b: Status 404 returned error can't find the container with id 847ee82691d1178ac5a3fd5de9e9ba7a4deb6034dda5d47d816ac154a7d3361b
Dec 09 16:58:59 crc kubenswrapper[4853]: I1209 16:58:59.383580 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ccm5l"
Dec 09 16:58:59 crc kubenswrapper[4853]: I1209 16:58:59.448831 4853 patch_prober.go:28] interesting pod/console-f9d7485db-xp79b container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 09 16:58:59 crc kubenswrapper[4853]: I1209 16:58:59.448909 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xp79b" podUID="71b908be-495e-4eb2-8429-56c89e4344f4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 09 16:58:59 crc kubenswrapper[4853]: I1209 16:58:59.876798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-77995" event={"ID":"7d55def8-578d-461b-9514-07eea9c62336","Type":"ContainerStarted","Data":"847ee82691d1178ac5a3fd5de9e9ba7a4deb6034dda5d47d816ac154a7d3361b"}
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.027444 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:59:00 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:59:00 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:59:00 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.027525 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.110435 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.205033 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfb806ca-de1f-4443-8980-e3171fa60c59-kube-api-access\") pod \"cfb806ca-de1f-4443-8980-e3171fa60c59\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") "
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.205094 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cfb806ca-de1f-4443-8980-e3171fa60c59-kubelet-dir\") pod \"cfb806ca-de1f-4443-8980-e3171fa60c59\" (UID: \"cfb806ca-de1f-4443-8980-e3171fa60c59\") "
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.205233 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfb806ca-de1f-4443-8980-e3171fa60c59-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cfb806ca-de1f-4443-8980-e3171fa60c59" (UID: "cfb806ca-de1f-4443-8980-e3171fa60c59"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.205370 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cfb806ca-de1f-4443-8980-e3171fa60c59-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.210536 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb806ca-de1f-4443-8980-e3171fa60c59-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cfb806ca-de1f-4443-8980-e3171fa60c59" (UID: "cfb806ca-de1f-4443-8980-e3171fa60c59"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.306154 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cfb806ca-de1f-4443-8980-e3171fa60c59-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.889922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-77995" event={"ID":"7d55def8-578d-461b-9514-07eea9c62336","Type":"ContainerStarted","Data":"b4bc1adbf9752e804e92a428e6722c37db33b94280355db02c0b7ac744105b53"}
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.896486 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.897335 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cfb806ca-de1f-4443-8980-e3171fa60c59","Type":"ContainerDied","Data":"819cf6e488cd41a5cc0b729975b8b80e549f7e248c5be19e73ed7659555d5772"}
Dec 09 16:59:00 crc kubenswrapper[4853]: I1209 16:59:00.897403 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="819cf6e488cd41a5cc0b729975b8b80e549f7e248c5be19e73ed7659555d5772"
Dec 09 16:59:01 crc kubenswrapper[4853]: I1209 16:59:01.026124 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:59:01 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:59:01 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:59:01 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:59:01 crc kubenswrapper[4853]: I1209 16:59:01.026198 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:59:02 crc kubenswrapper[4853]: I1209 16:59:02.026902 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:59:02 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:59:02 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:59:02 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:59:02 crc kubenswrapper[4853]: I1209 16:59:02.027150 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:59:03 crc kubenswrapper[4853]: I1209 16:59:03.025656 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 16:59:03 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld
Dec 09 16:59:03 crc kubenswrapper[4853]: [+]process-running ok
Dec 09 16:59:03 crc kubenswrapper[4853]: healthz check failed
Dec 09 16:59:03 crc kubenswrapper[4853]: I1209 16:59:03.025712 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 16:59:04 crc kubenswrapper[4853]: I1209 16:59:04.026255 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ffzns"
Dec 09 16:59:04 crc kubenswrapper[4853]: I1209 16:59:04.028907 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ffzns"
Dec 09 16:59:06 crc kubenswrapper[4853]: I1209 16:59:06.931150
4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-77995" event={"ID":"7d55def8-578d-461b-9514-07eea9c62336","Type":"ContainerStarted","Data":"a462444a87381d122e76e64142992787998a69bdd869981bfa785fbc0f18edae"} Dec 09 16:59:06 crc kubenswrapper[4853]: I1209 16:59:06.950262 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-77995" podStartSLOduration=150.95024641 podStartE2EDuration="2m30.95024641s" podCreationTimestamp="2025-12-09 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:59:06.947551344 +0000 UTC m=+173.882290536" watchObservedRunningTime="2025-12-09 16:59:06.95024641 +0000 UTC m=+173.884985582" Dec 09 16:59:09 crc kubenswrapper[4853]: I1209 16:59:09.452305 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:59:09 crc kubenswrapper[4853]: I1209 16:59:09.463112 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 16:59:10 crc kubenswrapper[4853]: I1209 16:59:10.685931 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 16:59:19 crc kubenswrapper[4853]: I1209 16:59:19.793479 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 16:59:20 crc kubenswrapper[4853]: I1209 16:59:20.638139 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9frbh" Dec 09 16:59:21 crc kubenswrapper[4853]: E1209 16:59:21.703382 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 16:59:21 crc kubenswrapper[4853]: E1209 16:59:21.703557 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6p74b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t2fjn_openshift-marketplace(e2b524f3-9a6d-4d43-a023-3b8deee90128): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 16:59:21 crc kubenswrapper[4853]: E1209 16:59:21.704746 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t2fjn" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" Dec 09 16:59:26 crc kubenswrapper[4853]: E1209 16:59:26.960327 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t2fjn" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" Dec 09 16:59:28 crc kubenswrapper[4853]: E1209 16:59:28.129716 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 16:59:28 crc kubenswrapper[4853]: E1209 16:59:28.130450 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pn89c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6w4cc_openshift-marketplace(539542f2-16c1-4479-814c-f39a282d6726): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 16:59:28 crc kubenswrapper[4853]: E1209 16:59:28.131701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6w4cc" podUID="539542f2-16c1-4479-814c-f39a282d6726" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.263443 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 16:59:28 crc kubenswrapper[4853]: E1209 16:59:28.263838 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb806ca-de1f-4443-8980-e3171fa60c59" containerName="pruner" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.263862 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb806ca-de1f-4443-8980-e3171fa60c59" containerName="pruner" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.263989 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb806ca-de1f-4443-8980-e3171fa60c59" containerName="pruner" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.264503 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.266039 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.266404 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.269082 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.408493 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.408611 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.510163 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.510254 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.510393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.535764 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.585389 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.592841 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 16:59:28 crc kubenswrapper[4853]: I1209 16:59:28.592902 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 16:59:29 crc kubenswrapper[4853]: E1209 16:59:29.510918 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6w4cc" podUID="539542f2-16c1-4479-814c-f39a282d6726" Dec 09 16:59:29 crc kubenswrapper[4853]: E1209 16:59:29.655366 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 16:59:29 crc kubenswrapper[4853]: E1209 16:59:29.655527 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-khwd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4p7jc_openshift-marketplace(d8795b7f-7c27-4e0b-9321-e03a8e520b2a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 16:59:29 crc kubenswrapper[4853]: E1209 16:59:29.656935 4853 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4p7jc" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" Dec 09 16:59:32 crc kubenswrapper[4853]: E1209 16:59:32.017864 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4p7jc" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" Dec 09 16:59:32 crc kubenswrapper[4853]: E1209 16:59:32.090082 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 16:59:32 crc kubenswrapper[4853]: E1209 16:59:32.090540 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5h7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vtlzd_openshift-marketplace(ed152681-91c0-40d0-be74-21f8e751080d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 16:59:32 crc kubenswrapper[4853]: E1209 16:59:32.091891 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vtlzd" podUID="ed152681-91c0-40d0-be74-21f8e751080d" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.080702 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vtlzd" podUID="ed152681-91c0-40d0-be74-21f8e751080d" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.129986 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.130467 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtg2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lg69z_openshift-marketplace(5baa3927-1796-48fe-9238-27f8717fbe89): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.131742 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lg69z" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.133764 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.133918 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrznq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8wf89_openshift-marketplace(c4e52886-10d9-49ec-8160-091f821e2cda): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.135063 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8wf89" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.137571 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.137684 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mvgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rsmkd_openshift-marketplace(058c275f-2ca6-45c0-9dd1-bcd8861c2fb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 16:59:33 crc kubenswrapper[4853]: E1209 16:59:33.138895 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rsmkd" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.445648 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.450314 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.451198 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: W1209 16:59:33.459835 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd7ac6fad_f04f_4d47_bc5f_75557864aff7.slice/crio-d5fe889f67fb7ba11f12e2e66f59daa7841c74f1a070c7584ed7467613d55649 WatchSource:0}: Error finding container d5fe889f67fb7ba11f12e2e66f59daa7841c74f1a070c7584ed7467613d55649: Status 404 returned error can't find the container with id d5fe889f67fb7ba11f12e2e66f59daa7841c74f1a070c7584ed7467613d55649 Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.464210 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.580101 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.580197 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61180e38-0858-499f-b0b1-1209a0a19ec2-kube-api-access\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.580223 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-var-lock\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.682029 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61180e38-0858-499f-b0b1-1209a0a19ec2-kube-api-access\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.682090 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-var-lock\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.682177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.682251 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.682353 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-var-lock\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.701146 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61180e38-0858-499f-b0b1-1209a0a19ec2-kube-api-access\") pod \"installer-9-crc\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:33 crc kubenswrapper[4853]: I1209 16:59:33.785123 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 16:59:34 crc kubenswrapper[4853]: I1209 16:59:34.088935 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7ac6fad-f04f-4d47-bc5f-75557864aff7","Type":"ContainerStarted","Data":"41e6f9841560340203b703bdff063c2a4f558faa410d369b84e3392c3a72a2a4"} Dec 09 16:59:34 crc kubenswrapper[4853]: I1209 16:59:34.089353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7ac6fad-f04f-4d47-bc5f-75557864aff7","Type":"ContainerStarted","Data":"d5fe889f67fb7ba11f12e2e66f59daa7841c74f1a070c7584ed7467613d55649"} Dec 09 16:59:34 crc kubenswrapper[4853]: I1209 16:59:34.092776 4853 generic.go:334] "Generic (PLEG): container finished" podID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerID="018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa" exitCode=0 Dec 09 16:59:34 crc kubenswrapper[4853]: I1209 16:59:34.093138 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rszvg" event={"ID":"cf894738-9ac9-49cf-a5be-c4414628c89c","Type":"ContainerDied","Data":"018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa"} Dec 09 16:59:34 crc kubenswrapper[4853]: E1209 16:59:34.094841 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lg69z" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" Dec 09 16:59:34 crc kubenswrapper[4853]: E1209 16:59:34.096548 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rsmkd" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" Dec 09 16:59:34 crc kubenswrapper[4853]: E1209 16:59:34.097453 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8wf89" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" Dec 09 16:59:34 crc kubenswrapper[4853]: I1209 16:59:34.115959 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=6.115941378 podStartE2EDuration="6.115941378s" podCreationTimestamp="2025-12-09 16:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:59:34.107379218 +0000 UTC m=+201.042118410" watchObservedRunningTime="2025-12-09 16:59:34.115941378 +0000 UTC m=+201.050680560" Dec 09 16:59:34 crc kubenswrapper[4853]: I1209 16:59:34.176583 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 16:59:34 crc kubenswrapper[4853]: W1209 16:59:34.177756 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod61180e38_0858_499f_b0b1_1209a0a19ec2.slice/crio-f42c3227eb1762a72553e5e5ff443589db3966afebabdebd38cd704f00628582 WatchSource:0}: Error finding container f42c3227eb1762a72553e5e5ff443589db3966afebabdebd38cd704f00628582: Status 404 returned error can't find the container with id f42c3227eb1762a72553e5e5ff443589db3966afebabdebd38cd704f00628582 Dec 09 16:59:35 crc kubenswrapper[4853]: I1209 16:59:35.100440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rszvg" event={"ID":"cf894738-9ac9-49cf-a5be-c4414628c89c","Type":"ContainerStarted","Data":"65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4"} Dec 09 16:59:35 crc kubenswrapper[4853]: I1209 16:59:35.102231 4853 generic.go:334] "Generic (PLEG): container finished" podID="d7ac6fad-f04f-4d47-bc5f-75557864aff7" containerID="41e6f9841560340203b703bdff063c2a4f558faa410d369b84e3392c3a72a2a4" exitCode=0 Dec 09 16:59:35 crc kubenswrapper[4853]: I1209 16:59:35.102404 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7ac6fad-f04f-4d47-bc5f-75557864aff7","Type":"ContainerDied","Data":"41e6f9841560340203b703bdff063c2a4f558faa410d369b84e3392c3a72a2a4"} Dec 09 16:59:35 crc kubenswrapper[4853]: I1209 16:59:35.106391 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"61180e38-0858-499f-b0b1-1209a0a19ec2","Type":"ContainerStarted","Data":"09f6bcd37c9cce87fbbe0320b23db6e131e809f0846f3064de604c1f4b5e27a4"} Dec 09 16:59:35 crc kubenswrapper[4853]: I1209 16:59:35.106439 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"61180e38-0858-499f-b0b1-1209a0a19ec2","Type":"ContainerStarted","Data":"f42c3227eb1762a72553e5e5ff443589db3966afebabdebd38cd704f00628582"} Dec 09 16:59:35 crc kubenswrapper[4853]: I1209 16:59:35.120321 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rszvg" podStartSLOduration=2.1670437590000002 podStartE2EDuration="42.120303907s" podCreationTimestamp="2025-12-09 16:58:53 +0000 UTC" firstStartedPulling="2025-12-09 16:58:54.863188049 +0000 UTC m=+161.797927231" lastFinishedPulling="2025-12-09 16:59:34.816448197 +0000 UTC m=+201.751187379" observedRunningTime="2025-12-09 16:59:35.119048091 +0000 UTC m=+202.053787293" watchObservedRunningTime="2025-12-09 16:59:35.120303907 +0000 UTC m=+202.055043089" Dec 09 16:59:35 crc kubenswrapper[4853]: I1209 16:59:35.139649 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.139632189 podStartE2EDuration="2.139632189s" podCreationTimestamp="2025-12-09 16:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 16:59:35.135820801 +0000 UTC m=+202.070559973" 
watchObservedRunningTime="2025-12-09 16:59:35.139632189 +0000 UTC m=+202.074371371" Dec 09 16:59:36 crc kubenswrapper[4853]: I1209 16:59:36.433984 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:36 crc kubenswrapper[4853]: I1209 16:59:36.624427 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kubelet-dir\") pod \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " Dec 09 16:59:36 crc kubenswrapper[4853]: I1209 16:59:36.624532 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kube-api-access\") pod \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\" (UID: \"d7ac6fad-f04f-4d47-bc5f-75557864aff7\") " Dec 09 16:59:36 crc kubenswrapper[4853]: I1209 16:59:36.624646 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7ac6fad-f04f-4d47-bc5f-75557864aff7" (UID: "d7ac6fad-f04f-4d47-bc5f-75557864aff7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 16:59:36 crc kubenswrapper[4853]: I1209 16:59:36.625041 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 16:59:36 crc kubenswrapper[4853]: I1209 16:59:36.636199 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7ac6fad-f04f-4d47-bc5f-75557864aff7" (UID: "d7ac6fad-f04f-4d47-bc5f-75557864aff7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:59:36 crc kubenswrapper[4853]: I1209 16:59:36.725975 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7ac6fad-f04f-4d47-bc5f-75557864aff7-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 16:59:37 crc kubenswrapper[4853]: I1209 16:59:37.119988 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d7ac6fad-f04f-4d47-bc5f-75557864aff7","Type":"ContainerDied","Data":"d5fe889f67fb7ba11f12e2e66f59daa7841c74f1a070c7584ed7467613d55649"} Dec 09 16:59:37 crc kubenswrapper[4853]: I1209 16:59:37.120275 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5fe889f67fb7ba11f12e2e66f59daa7841c74f1a070c7584ed7467613d55649" Dec 09 16:59:37 crc kubenswrapper[4853]: I1209 16:59:37.120060 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 16:59:40 crc kubenswrapper[4853]: I1209 16:59:40.091842 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5vb7d"] Dec 09 16:59:44 crc kubenswrapper[4853]: I1209 16:59:44.112344 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rszvg" Dec 09 16:59:44 crc kubenswrapper[4853]: I1209 16:59:44.112889 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rszvg" Dec 09 16:59:44 crc kubenswrapper[4853]: I1209 16:59:44.160414 4853 generic.go:334] "Generic (PLEG): container finished" podID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerID="0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1" exitCode=0 Dec 09 16:59:44 crc kubenswrapper[4853]: I1209 16:59:44.160463 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2fjn" event={"ID":"e2b524f3-9a6d-4d43-a023-3b8deee90128","Type":"ContainerDied","Data":"0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1"} Dec 09 16:59:44 crc kubenswrapper[4853]: I1209 16:59:44.195363 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rszvg" Dec 09 16:59:44 crc kubenswrapper[4853]: I1209 16:59:44.237336 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rszvg" Dec 09 16:59:45 crc kubenswrapper[4853]: I1209 16:59:45.166501 4853 generic.go:334] "Generic (PLEG): container finished" podID="539542f2-16c1-4479-814c-f39a282d6726" containerID="8ce41880b05bb04373888a3cb10aa199ac3175aaf9092f8e4a7d4d1eb7478ba0" exitCode=0 Dec 09 16:59:45 crc kubenswrapper[4853]: I1209 16:59:45.166722 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w4cc" event={"ID":"539542f2-16c1-4479-814c-f39a282d6726","Type":"ContainerDied","Data":"8ce41880b05bb04373888a3cb10aa199ac3175aaf9092f8e4a7d4d1eb7478ba0"} Dec 09 16:59:45 crc kubenswrapper[4853]: I1209 16:59:45.170907 4853 generic.go:334] "Generic (PLEG): container finished" podID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerID="53c4c6422b0ddaf5aa348720a23d5d817297f05eff3eb198f87bb57cde2010bc" exitCode=0 Dec 09 16:59:45 crc kubenswrapper[4853]: I1209 16:59:45.170964 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7jc" event={"ID":"d8795b7f-7c27-4e0b-9321-e03a8e520b2a","Type":"ContainerDied","Data":"53c4c6422b0ddaf5aa348720a23d5d817297f05eff3eb198f87bb57cde2010bc"} Dec 09 16:59:45 crc kubenswrapper[4853]: I1209 16:59:45.174967 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2fjn" event={"ID":"e2b524f3-9a6d-4d43-a023-3b8deee90128","Type":"ContainerStarted","Data":"ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a"} Dec 09 16:59:45 crc kubenswrapper[4853]: I1209 16:59:45.223482 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t2fjn" podStartSLOduration=1.854039596 podStartE2EDuration="55.223463239s" podCreationTimestamp="2025-12-09 16:58:50 +0000 UTC" firstStartedPulling="2025-12-09 16:58:51.580895228 +0000 UTC m=+158.515634410" lastFinishedPulling="2025-12-09 16:59:44.950318841 +0000 UTC m=+211.885058053" 
observedRunningTime="2025-12-09 16:59:45.218221102 +0000 UTC m=+212.152960304" watchObservedRunningTime="2025-12-09 16:59:45.223463239 +0000 UTC m=+212.158202431" Dec 09 16:59:46 crc kubenswrapper[4853]: I1209 16:59:46.181659 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7jc" event={"ID":"d8795b7f-7c27-4e0b-9321-e03a8e520b2a","Type":"ContainerStarted","Data":"c64f7dc73fd952cc9dee9a57106f28363c4a0411b5cd9dd8cb0b45293afcdc32"} Dec 09 16:59:46 crc kubenswrapper[4853]: I1209 16:59:46.186007 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w4cc" event={"ID":"539542f2-16c1-4479-814c-f39a282d6726","Type":"ContainerStarted","Data":"164ae28918e537c3ffaab26bf1f6198fed5284f6c8e0ed1805b7772d0878a746"} Dec 09 16:59:46 crc kubenswrapper[4853]: I1209 16:59:46.216535 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4p7jc" podStartSLOduration=2.170167148 podStartE2EDuration="55.216516981s" podCreationTimestamp="2025-12-09 16:58:51 +0000 UTC" firstStartedPulling="2025-12-09 16:58:52.632028229 +0000 UTC m=+159.566767411" lastFinishedPulling="2025-12-09 16:59:45.678378052 +0000 UTC m=+212.613117244" observedRunningTime="2025-12-09 16:59:46.215376158 +0000 UTC m=+213.150115330" watchObservedRunningTime="2025-12-09 16:59:46.216516981 +0000 UTC m=+213.151256163" Dec 09 16:59:46 crc kubenswrapper[4853]: I1209 16:59:46.234941 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6w4cc" podStartSLOduration=2.417956207 podStartE2EDuration="52.234927616s" podCreationTimestamp="2025-12-09 16:58:54 +0000 UTC" firstStartedPulling="2025-12-09 16:58:55.807156083 +0000 UTC m=+162.741895265" lastFinishedPulling="2025-12-09 16:59:45.624127492 +0000 UTC m=+212.558866674" observedRunningTime="2025-12-09 16:59:46.234400871 +0000 UTC m=+213.169140053" watchObservedRunningTime="2025-12-09 16:59:46.234927616 +0000 UTC m=+213.169666788" Dec 09 16:59:47 crc kubenswrapper[4853]: I1209 16:59:47.193649 4853 generic.go:334] "Generic (PLEG): container finished" podID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerID="244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415" exitCode=0 Dec 09 16:59:47 crc kubenswrapper[4853]: I1209 16:59:47.193695 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsmkd" event={"ID":"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5","Type":"ContainerDied","Data":"244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415"} Dec 09 16:59:48 crc kubenswrapper[4853]: I1209 16:59:48.201135 4853 generic.go:334] "Generic (PLEG): container finished" podID="ed152681-91c0-40d0-be74-21f8e751080d" containerID="4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531" exitCode=0 Dec 09 16:59:48 crc kubenswrapper[4853]: I1209 16:59:48.201205 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtlzd" event={"ID":"ed152681-91c0-40d0-be74-21f8e751080d","Type":"ContainerDied","Data":"4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531"} Dec 09 16:59:48 crc kubenswrapper[4853]: I1209 16:59:48.204772 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsmkd" event={"ID":"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5","Type":"ContainerStarted","Data":"6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da"} Dec 09 
16:59:48 crc kubenswrapper[4853]: I1209 16:59:48.587268 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rsmkd" podStartSLOduration=2.622329191 podStartE2EDuration="55.587249776s" podCreationTimestamp="2025-12-09 16:58:53 +0000 UTC" firstStartedPulling="2025-12-09 16:58:54.742936556 +0000 UTC m=+161.677675738" lastFinishedPulling="2025-12-09 16:59:47.707857141 +0000 UTC m=+214.642596323" observedRunningTime="2025-12-09 16:59:48.262412799 +0000 UTC m=+215.197152001" watchObservedRunningTime="2025-12-09 16:59:48.587249776 +0000 UTC m=+215.521988958" Dec 09 16:59:49 crc kubenswrapper[4853]: I1209 16:59:49.218747 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtlzd" event={"ID":"ed152681-91c0-40d0-be74-21f8e751080d","Type":"ContainerStarted","Data":"b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400"} Dec 09 16:59:49 crc kubenswrapper[4853]: I1209 16:59:49.234432 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vtlzd" podStartSLOduration=1.914291026 podStartE2EDuration="59.2344188s" podCreationTimestamp="2025-12-09 16:58:50 +0000 UTC" firstStartedPulling="2025-12-09 16:58:51.621052995 +0000 UTC m=+158.555792177" lastFinishedPulling="2025-12-09 16:59:48.941180779 +0000 UTC m=+215.875919951" observedRunningTime="2025-12-09 16:59:49.232672941 +0000 UTC m=+216.167412123" watchObservedRunningTime="2025-12-09 16:59:49.2344188 +0000 UTC m=+216.169157982" Dec 09 16:59:50 crc kubenswrapper[4853]: I1209 16:59:50.225928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wf89" event={"ID":"c4e52886-10d9-49ec-8160-091f821e2cda","Type":"ContainerStarted","Data":"a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5"} Dec 09 16:59:50 crc kubenswrapper[4853]: I1209 16:59:50.804941 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:59:50 crc kubenswrapper[4853]: I1209 16:59:50.805244 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:59:50 crc kubenswrapper[4853]: I1209 16:59:50.863077 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.056481 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.056540 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.097554 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.233469 4853 generic.go:334] "Generic (PLEG): container finished" podID="c4e52886-10d9-49ec-8160-091f821e2cda" containerID="a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5" exitCode=0 Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.233545 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wf89" 
event={"ID":"c4e52886-10d9-49ec-8160-091f821e2cda","Type":"ContainerDied","Data":"a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5"} Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.241857 4853 generic.go:334] "Generic (PLEG): container finished" podID="5baa3927-1796-48fe-9238-27f8717fbe89" containerID="92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb" exitCode=0 Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.241963 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg69z" event={"ID":"5baa3927-1796-48fe-9238-27f8717fbe89","Type":"ContainerDied","Data":"92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb"} Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.298245 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.415833 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.415898 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:59:51 crc kubenswrapper[4853]: I1209 16:59:51.465718 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:59:52 crc kubenswrapper[4853]: I1209 16:59:52.249533 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg69z" event={"ID":"5baa3927-1796-48fe-9238-27f8717fbe89","Type":"ContainerStarted","Data":"e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2"} Dec 09 16:59:52 crc kubenswrapper[4853]: I1209 16:59:52.252954 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wf89" event={"ID":"c4e52886-10d9-49ec-8160-091f821e2cda","Type":"ContainerStarted","Data":"98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f"} Dec 09 16:59:52 crc kubenswrapper[4853]: I1209 16:59:52.268968 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lg69z" podStartSLOduration=2.9461093419999997 podStartE2EDuration="1m0.268949296s" podCreationTimestamp="2025-12-09 16:58:52 +0000 UTC" firstStartedPulling="2025-12-09 16:58:54.724262993 +0000 UTC m=+161.659002175" lastFinishedPulling="2025-12-09 16:59:52.047102947 +0000 UTC m=+218.981842129" observedRunningTime="2025-12-09 16:59:52.265569232 +0000 UTC m=+219.200308414" watchObservedRunningTime="2025-12-09 16:59:52.268949296 +0000 UTC m=+219.203688478" Dec 09 16:59:52 crc kubenswrapper[4853]: I1209 16:59:52.287977 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8wf89" podStartSLOduration=2.949921545 podStartE2EDuration="1m2.287956259s" podCreationTimestamp="2025-12-09 16:58:50 +0000 UTC" firstStartedPulling="2025-12-09 16:58:52.61979557 +0000 UTC m=+159.554534752" lastFinishedPulling="2025-12-09 16:59:51.957830274 +0000 UTC m=+218.892569466" observedRunningTime="2025-12-09 16:59:52.283885775 +0000 UTC m=+219.218624967" watchObservedRunningTime="2025-12-09 16:59:52.287956259 +0000 UTC m=+219.222695441" Dec 09 16:59:52 crc kubenswrapper[4853]: I1209 16:59:52.299640 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:59:53 crc kubenswrapper[4853]: I1209 16:59:53.032278 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lg69z" Dec 09 16:59:53 crc kubenswrapper[4853]: I1209 16:59:53.032620 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lg69z" Dec 09 16:59:53 crc kubenswrapper[4853]: I1209 16:59:53.403891 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rsmkd" Dec 09 16:59:53 crc kubenswrapper[4853]: I1209 16:59:53.403950 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rsmkd" Dec 09 16:59:53 crc kubenswrapper[4853]: I1209 16:59:53.440305 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rsmkd" Dec 09 16:59:54 crc kubenswrapper[4853]: I1209 16:59:54.068943 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lg69z" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="registry-server" probeResult="failure" output=< Dec 09 16:59:54 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 16:59:54 crc kubenswrapper[4853]: > Dec 09 16:59:54 crc kubenswrapper[4853]: I1209 16:59:54.302188 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rsmkd" Dec 09 16:59:54 crc kubenswrapper[4853]: I1209 16:59:54.450426 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6w4cc" Dec 09 16:59:54 crc kubenswrapper[4853]: I1209 16:59:54.450465 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6w4cc" Dec 09 16:59:54 crc kubenswrapper[4853]: I1209 16:59:54.516547 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6w4cc" Dec 09 16:59:54 crc kubenswrapper[4853]: I1209 16:59:54.596318 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4p7jc"] Dec 09 16:59:54 crc kubenswrapper[4853]: I1209 16:59:54.596533 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4p7jc" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="registry-server" containerID="cri-o://c64f7dc73fd952cc9dee9a57106f28363c4a0411b5cd9dd8cb0b45293afcdc32" gracePeriod=2 Dec 09 16:59:55 crc kubenswrapper[4853]: I1209 16:59:55.323861 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6w4cc" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.278377 4853 generic.go:334] "Generic (PLEG): container finished" podID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerID="c64f7dc73fd952cc9dee9a57106f28363c4a0411b5cd9dd8cb0b45293afcdc32" exitCode=0 Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.278475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7jc" event={"ID":"d8795b7f-7c27-4e0b-9321-e03a8e520b2a","Type":"ContainerDied","Data":"c64f7dc73fd952cc9dee9a57106f28363c4a0411b5cd9dd8cb0b45293afcdc32"} Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.502509 4853 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.600423 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khwd6\" (UniqueName: \"kubernetes.io/projected/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-kube-api-access-khwd6\") pod \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.600506 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-utilities\") pod \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.600550 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-catalog-content\") pod \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\" (UID: \"d8795b7f-7c27-4e0b-9321-e03a8e520b2a\") " Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.601911 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-utilities" (OuterVolumeSpecName: "utilities") pod "d8795b7f-7c27-4e0b-9321-e03a8e520b2a" (UID: "d8795b7f-7c27-4e0b-9321-e03a8e520b2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.607462 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-kube-api-access-khwd6" (OuterVolumeSpecName: "kube-api-access-khwd6") pod "d8795b7f-7c27-4e0b-9321-e03a8e520b2a" (UID: "d8795b7f-7c27-4e0b-9321-e03a8e520b2a"). InnerVolumeSpecName "kube-api-access-khwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.703228 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khwd6\" (UniqueName: \"kubernetes.io/projected/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-kube-api-access-khwd6\") on node \"crc\" DevicePath \"\"" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.703306 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.773404 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8795b7f-7c27-4e0b-9321-e03a8e520b2a" (UID: "d8795b7f-7c27-4e0b-9321-e03a8e520b2a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.805240 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8795b7f-7c27-4e0b-9321-e03a8e520b2a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.995453 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsmkd"] Dec 09 16:59:56 crc kubenswrapper[4853]: I1209 16:59:56.995687 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rsmkd" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="registry-server" containerID="cri-o://6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da" gracePeriod=2 Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.197448 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6w4cc"] Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.285627 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7jc" event={"ID":"d8795b7f-7c27-4e0b-9321-e03a8e520b2a","Type":"ContainerDied","Data":"a50e7e54f63b7d8fb9308e83821e97e778a868c549d802fb554f19416c5acce5"} Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.285666 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p7jc" Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.285792 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6w4cc" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="registry-server" containerID="cri-o://164ae28918e537c3ffaab26bf1f6198fed5284f6c8e0ed1805b7772d0878a746" gracePeriod=2 Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.285793 4853 scope.go:117] "RemoveContainer" containerID="c64f7dc73fd952cc9dee9a57106f28363c4a0411b5cd9dd8cb0b45293afcdc32" Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.320508 4853 scope.go:117] "RemoveContainer" containerID="53c4c6422b0ddaf5aa348720a23d5d817297f05eff3eb198f87bb57cde2010bc" Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.335765 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4p7jc"] Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.342310 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4p7jc"] Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.349262 4853 scope.go:117] "RemoveContainer" containerID="63ef67fd1feb6c5483e999921adb7a6946872dca6042ae1b08ef601d464789f8" Dec 09 16:59:57 crc kubenswrapper[4853]: I1209 16:59:57.577150 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" path="/var/lib/kubelet/pods/d8795b7f-7c27-4e0b-9321-e03a8e520b2a/volumes" Dec 09 16:59:58 crc kubenswrapper[4853]: I1209 16:59:58.595906 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 16:59:58 crc kubenswrapper[4853]: I1209 16:59:58.595965 4853 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 16:59:58 crc kubenswrapper[4853]: I1209 16:59:58.596019 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 16:59:58 crc kubenswrapper[4853]: I1209 16:59:58.596540 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 16:59:58 crc kubenswrapper[4853]: I1209 16:59:58.596658 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6" gracePeriod=600 Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.146885 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68"] Dec 09 17:00:00 crc kubenswrapper[4853]: E1209 17:00:00.147740 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="extract-utilities" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.147764 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="extract-utilities" Dec 09 17:00:00 crc kubenswrapper[4853]: E1209 17:00:00.147829 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ac6fad-f04f-4d47-bc5f-75557864aff7" containerName="pruner" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.147886 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ac6fad-f04f-4d47-bc5f-75557864aff7" containerName="pruner" Dec 09 17:00:00 crc kubenswrapper[4853]: E1209 17:00:00.147941 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="registry-server" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.147959 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="registry-server" Dec 09 17:00:00 crc kubenswrapper[4853]: E1209 17:00:00.147988 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="extract-content" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.148001 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="extract-content" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.148248 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ac6fad-f04f-4d47-bc5f-75557864aff7" containerName="pruner" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.148278 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8795b7f-7c27-4e0b-9321-e03a8e520b2a" containerName="registry-server" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.149694 4853 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.150789 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68"] Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.155490 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.155764 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.261515 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53665226-059a-426d-9d71-9ee8ecc2c2b4-config-volume\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.261576 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53665226-059a-426d-9d71-9ee8ecc2c2b4-secret-volume\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.261835 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsmkd" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.261895 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbp9\" (UniqueName: \"kubernetes.io/projected/53665226-059a-426d-9d71-9ee8ecc2c2b4-kube-api-access-snbp9\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.308172 4853 generic.go:334] "Generic (PLEG): container finished" podID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerID="6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da" exitCode=0 Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.308269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsmkd" event={"ID":"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5","Type":"ContainerDied","Data":"6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da"} Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.308489 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsmkd" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.308727 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsmkd" event={"ID":"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5","Type":"ContainerDied","Data":"291a284d2cb0eff0de851db6dc9d002327912238baa45c44c2bd14ece3e1b314"} Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.308766 4853 scope.go:117] "RemoveContainer" containerID="6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.318446 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6" exitCode=0 Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.318551 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6"} Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.321207 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w4cc_539542f2-16c1-4479-814c-f39a282d6726/registry-server/0.log" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.322183 4853 generic.go:334] "Generic (PLEG): container finished" podID="539542f2-16c1-4479-814c-f39a282d6726" containerID="164ae28918e537c3ffaab26bf1f6198fed5284f6c8e0ed1805b7772d0878a746" exitCode=137 Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.322220 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w4cc" event={"ID":"539542f2-16c1-4479-814c-f39a282d6726","Type":"ContainerDied","Data":"164ae28918e537c3ffaab26bf1f6198fed5284f6c8e0ed1805b7772d0878a746"} Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.327741 4853 scope.go:117] "RemoveContainer" containerID="244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.341168 4853 scope.go:117] "RemoveContainer" containerID="19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.361069 4853 scope.go:117] "RemoveContainer" containerID="6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da" Dec 09 17:00:00 crc kubenswrapper[4853]: E1209 17:00:00.361543 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da\": container with ID starting with 6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da not found: ID does not exist" containerID="6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.361613 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da"} err="failed to get container status \"6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da\": rpc error: code = NotFound desc = could not find container \"6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da\": container with ID starting with 6005f6b15c972f248325f720d49a2aebda9d21d9126dbcc2f0d9e287cbb4f7da not found: ID 
does not exist" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.361651 4853 scope.go:117] "RemoveContainer" containerID="244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415" Dec 09 17:00:00 crc kubenswrapper[4853]: E1209 17:00:00.362177 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415\": container with ID starting with 244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415 not found: ID does not exist" containerID="244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362211 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415"} err="failed to get container status \"244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415\": rpc error: code = NotFound desc = could not find container \"244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415\": container with ID starting with 244ddff3904d83aff3f058a9d07a71b08d27bfae0c5d8aba59fbba9c9893d415 not found: ID does not exist" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362239 4853 scope.go:117] "RemoveContainer" containerID="19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362425 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-utilities\") pod \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362517 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-catalog-content\") pod \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362586 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mvgd\" (UniqueName: \"kubernetes.io/projected/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-kube-api-access-6mvgd\") pod \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\" (UID: \"058c275f-2ca6-45c0-9dd1-bcd8861c2fb5\") " Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362798 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbp9\" (UniqueName: \"kubernetes.io/projected/53665226-059a-426d-9d71-9ee8ecc2c2b4-kube-api-access-snbp9\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362846 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53665226-059a-426d-9d71-9ee8ecc2c2b4-config-volume\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: E1209 17:00:00.362869 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a\": container with ID starting with 19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a not found: ID does not exist" containerID="19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362913 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a"} err="failed to get container status \"19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a\": rpc error: code = NotFound desc = could not find container \"19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a\": container with ID starting with 19c6fb5d1b0f86a183deb8a5d6d067e52cd3969340b8f9b93690fb1a349c5f0a not found: ID does not exist" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.362879 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53665226-059a-426d-9d71-9ee8ecc2c2b4-secret-volume\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.363555 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-utilities" (OuterVolumeSpecName: "utilities") pod "058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" (UID: "058c275f-2ca6-45c0-9dd1-bcd8861c2fb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.365254 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53665226-059a-426d-9d71-9ee8ecc2c2b4-config-volume\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.368550 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53665226-059a-426d-9d71-9ee8ecc2c2b4-secret-volume\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.370392 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-kube-api-access-6mvgd" (OuterVolumeSpecName: "kube-api-access-6mvgd") pod "058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" (UID: "058c275f-2ca6-45c0-9dd1-bcd8861c2fb5"). InnerVolumeSpecName "kube-api-access-6mvgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.381171 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbp9\" (UniqueName: \"kubernetes.io/projected/53665226-059a-426d-9d71-9ee8ecc2c2b4-kube-api-access-snbp9\") pod \"collect-profiles-29421660-9mq68\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.392680 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" (UID: "058c275f-2ca6-45c0-9dd1-bcd8861c2fb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.397824 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w4cc_539542f2-16c1-4479-814c-f39a282d6726/registry-server/0.log" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.398554 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w4cc" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.463395 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn89c\" (UniqueName: \"kubernetes.io/projected/539542f2-16c1-4479-814c-f39a282d6726-kube-api-access-pn89c\") pod \"539542f2-16c1-4479-814c-f39a282d6726\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.463476 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-catalog-content\") pod \"539542f2-16c1-4479-814c-f39a282d6726\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.463583 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-utilities\") pod \"539542f2-16c1-4479-814c-f39a282d6726\" (UID: \"539542f2-16c1-4479-814c-f39a282d6726\") " Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.463771 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mvgd\" (UniqueName: \"kubernetes.io/projected/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-kube-api-access-6mvgd\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.463788 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.463797 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.464452 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-utilities" (OuterVolumeSpecName: "utilities") pod "539542f2-16c1-4479-814c-f39a282d6726" (UID: 
"539542f2-16c1-4479-814c-f39a282d6726"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.465784 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539542f2-16c1-4479-814c-f39a282d6726-kube-api-access-pn89c" (OuterVolumeSpecName: "kube-api-access-pn89c") pod "539542f2-16c1-4479-814c-f39a282d6726" (UID: "539542f2-16c1-4479-814c-f39a282d6726"). InnerVolumeSpecName "kube-api-access-pn89c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.491686 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.564834 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.564866 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn89c\" (UniqueName: \"kubernetes.io/projected/539542f2-16c1-4479-814c-f39a282d6726-kube-api-access-pn89c\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.640560 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsmkd"] Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.646187 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsmkd"] Dec 09 17:00:00 crc kubenswrapper[4853]: I1209 17:00:00.920925 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68"] Dec 09 17:00:00 crc kubenswrapper[4853]: W1209 17:00:00.936817 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53665226_059a_426d_9d71_9ee8ecc2c2b4.slice/crio-8b9a9a6cabca02a0fa0b79ca471a434e100ebbb4076f9ffe44848da0ac669523 WatchSource:0}: Error finding container 8b9a9a6cabca02a0fa0b79ca471a434e100ebbb4076f9ffe44848da0ac669523: Status 404 returned error can't find the container with id 8b9a9a6cabca02a0fa0b79ca471a434e100ebbb4076f9ffe44848da0ac669523 Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.121380 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.288492 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8wf89" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.289560 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8wf89" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.331674 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" event={"ID":"53665226-059a-426d-9d71-9ee8ecc2c2b4","Type":"ContainerStarted","Data":"8b9a9a6cabca02a0fa0b79ca471a434e100ebbb4076f9ffe44848da0ac669523"} Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.340325 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8wf89" Dec 09 17:00:01 crc 
kubenswrapper[4853]: I1209 17:00:01.343294 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w4cc_539542f2-16c1-4479-814c-f39a282d6726/registry-server/0.log" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.344306 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w4cc" event={"ID":"539542f2-16c1-4479-814c-f39a282d6726","Type":"ContainerDied","Data":"10fd1029d6c9769b3f1ad4771202c0e757a9c6ded3e89b4ea8a1459e6fa3b3b1"} Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.344450 4853 scope.go:117] "RemoveContainer" containerID="164ae28918e537c3ffaab26bf1f6198fed5284f6c8e0ed1805b7772d0878a746" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.345209 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w4cc" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.363488 4853 scope.go:117] "RemoveContainer" containerID="8ce41880b05bb04373888a3cb10aa199ac3175aaf9092f8e4a7d4d1eb7478ba0" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.376948 4853 scope.go:117] "RemoveContainer" containerID="d4027e8b85f948e81f5f6fed0fa960f0e1f602bb8d3e2b823f8d6cbc6b73c005" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.397313 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8wf89" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.574552 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" path="/var/lib/kubelet/pods/058c275f-2ca6-45c0-9dd1-bcd8861c2fb5/volumes" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.879354 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "539542f2-16c1-4479-814c-f39a282d6726" (UID: "539542f2-16c1-4479-814c-f39a282d6726"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.882967 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539542f2-16c1-4479-814c-f39a282d6726-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.969676 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6w4cc"] Dec 09 17:00:01 crc kubenswrapper[4853]: I1209 17:00:01.972758 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6w4cc"] Dec 09 17:00:02 crc kubenswrapper[4853]: I1209 17:00:02.353046 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" event={"ID":"53665226-059a-426d-9d71-9ee8ecc2c2b4","Type":"ContainerStarted","Data":"89a3c70c364dc675473593eb5e6b97266dc1ca7566df2838259c02000211c6e8"} Dec 09 17:00:02 crc kubenswrapper[4853]: I1209 17:00:02.356126 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"e9090375773fe6a3dc05961f24a074b45b6f59cd6d5e586b7e14cdea2d22dac4"} Dec 09 17:00:02 crc kubenswrapper[4853]: I1209 17:00:02.382010 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" podStartSLOduration=2.381984566 podStartE2EDuration="2.381984566s" podCreationTimestamp="2025-12-09 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:00:02.373044636 +0000 UTC m=+229.307783818" watchObservedRunningTime="2025-12-09 17:00:02.381984566 +0000 UTC m=+229.316723768" Dec 09 17:00:03 crc kubenswrapper[4853]: I1209 17:00:03.077276 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lg69z" Dec 09 17:00:03 crc kubenswrapper[4853]: I1209 17:00:03.118055 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lg69z" Dec 09 17:00:03 crc kubenswrapper[4853]: I1209 17:00:03.361449 4853 generic.go:334] "Generic (PLEG): container finished" podID="53665226-059a-426d-9d71-9ee8ecc2c2b4" containerID="89a3c70c364dc675473593eb5e6b97266dc1ca7566df2838259c02000211c6e8" exitCode=0 Dec 09 17:00:03 crc kubenswrapper[4853]: I1209 17:00:03.362408 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" event={"ID":"53665226-059a-426d-9d71-9ee8ecc2c2b4","Type":"ContainerDied","Data":"89a3c70c364dc675473593eb5e6b97266dc1ca7566df2838259c02000211c6e8"} Dec 09 17:00:03 crc kubenswrapper[4853]: I1209 17:00:03.573960 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="539542f2-16c1-4479-814c-f39a282d6726" path="/var/lib/kubelet/pods/539542f2-16c1-4479-814c-f39a282d6726/volumes" Dec 09 17:00:03 crc kubenswrapper[4853]: I1209 17:00:03.995625 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8wf89"] Dec 09 17:00:03 crc kubenswrapper[4853]: I1209 17:00:03.995845 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8wf89" 
podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="registry-server" containerID="cri-o://98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f" gracePeriod=2 Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.361771 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wf89" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.367534 4853 generic.go:334] "Generic (PLEG): container finished" podID="c4e52886-10d9-49ec-8160-091f821e2cda" containerID="98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f" exitCode=0 Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.367574 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wf89" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.367578 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wf89" event={"ID":"c4e52886-10d9-49ec-8160-091f821e2cda","Type":"ContainerDied","Data":"98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f"} Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.367691 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wf89" event={"ID":"c4e52886-10d9-49ec-8160-091f821e2cda","Type":"ContainerDied","Data":"5ed98101be27e0b055a714fc180367a479b86c7b85f1f08cb5a39564d33abe47"} Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.367716 4853 scope.go:117] "RemoveContainer" containerID="98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.409903 4853 scope.go:117] "RemoveContainer" containerID="a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.416200 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-utilities\") pod \"c4e52886-10d9-49ec-8160-091f821e2cda\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.416470 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-catalog-content\") pod \"c4e52886-10d9-49ec-8160-091f821e2cda\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.416538 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrznq\" (UniqueName: \"kubernetes.io/projected/c4e52886-10d9-49ec-8160-091f821e2cda-kube-api-access-wrznq\") pod \"c4e52886-10d9-49ec-8160-091f821e2cda\" (UID: \"c4e52886-10d9-49ec-8160-091f821e2cda\") " Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.417026 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-utilities" (OuterVolumeSpecName: "utilities") pod "c4e52886-10d9-49ec-8160-091f821e2cda" (UID: "c4e52886-10d9-49ec-8160-091f821e2cda"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.447954 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e52886-10d9-49ec-8160-091f821e2cda-kube-api-access-wrznq" (OuterVolumeSpecName: "kube-api-access-wrznq") pod "c4e52886-10d9-49ec-8160-091f821e2cda" (UID: "c4e52886-10d9-49ec-8160-091f821e2cda"). InnerVolumeSpecName "kube-api-access-wrznq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.466768 4853 scope.go:117] "RemoveContainer" containerID="5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.479183 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4e52886-10d9-49ec-8160-091f821e2cda" (UID: "c4e52886-10d9-49ec-8160-091f821e2cda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.492354 4853 scope.go:117] "RemoveContainer" containerID="98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f" Dec 09 17:00:04 crc kubenswrapper[4853]: E1209 17:00:04.497002 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f\": container with ID starting with 98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f not found: ID does not exist" containerID="98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.497047 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f"} err="failed to get container status \"98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f\": rpc error: code = NotFound desc = could not find container \"98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f\": container with ID starting with 98db548dbbfc6249d6018cb86c48ea36fa818dc76a6d827aa080ae75b145264f not found: ID does not exist" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.497083 4853 scope.go:117] "RemoveContainer" containerID="a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5" Dec 09 17:00:04 crc kubenswrapper[4853]: E1209 17:00:04.507560 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5\": container with ID starting with a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5 not found: ID does not exist" containerID="a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.507635 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5"} err="failed to get container status \"a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5\": rpc error: code = NotFound desc = could not find container \"a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5\": container with ID starting with 
a9ddc65ee864753713c0b04e77b18e11c37a955c3f4d6cf0a8226c3e84417bc5 not found: ID does not exist" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.507672 4853 scope.go:117] "RemoveContainer" containerID="5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b" Dec 09 17:00:04 crc kubenswrapper[4853]: E1209 17:00:04.509003 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b\": container with ID starting with 5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b not found: ID does not exist" containerID="5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.509029 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b"} err="failed to get container status \"5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b\": rpc error: code = NotFound desc = could not find container \"5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b\": container with ID starting with 5ad38511615dd7830cd46b11f7db2f5a09d1cc084f79371e9b118af6cab4996b not found: ID does not exist" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.517664 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.517694 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e52886-10d9-49ec-8160-091f821e2cda-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.517707 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrznq\" (UniqueName: \"kubernetes.io/projected/c4e52886-10d9-49ec-8160-091f821e2cda-kube-api-access-wrznq\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.646721 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.699325 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8wf89"] Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.701970 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8wf89"] Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.719913 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53665226-059a-426d-9d71-9ee8ecc2c2b4-config-volume\") pod \"53665226-059a-426d-9d71-9ee8ecc2c2b4\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.719965 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snbp9\" (UniqueName: \"kubernetes.io/projected/53665226-059a-426d-9d71-9ee8ecc2c2b4-kube-api-access-snbp9\") pod \"53665226-059a-426d-9d71-9ee8ecc2c2b4\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.720026 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53665226-059a-426d-9d71-9ee8ecc2c2b4-secret-volume\") pod \"53665226-059a-426d-9d71-9ee8ecc2c2b4\" (UID: \"53665226-059a-426d-9d71-9ee8ecc2c2b4\") " Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.721228 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53665226-059a-426d-9d71-9ee8ecc2c2b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "53665226-059a-426d-9d71-9ee8ecc2c2b4" (UID: "53665226-059a-426d-9d71-9ee8ecc2c2b4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.723386 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53665226-059a-426d-9d71-9ee8ecc2c2b4-kube-api-access-snbp9" (OuterVolumeSpecName: "kube-api-access-snbp9") pod "53665226-059a-426d-9d71-9ee8ecc2c2b4" (UID: "53665226-059a-426d-9d71-9ee8ecc2c2b4"). InnerVolumeSpecName "kube-api-access-snbp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.723716 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53665226-059a-426d-9d71-9ee8ecc2c2b4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "53665226-059a-426d-9d71-9ee8ecc2c2b4" (UID: "53665226-059a-426d-9d71-9ee8ecc2c2b4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.821039 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53665226-059a-426d-9d71-9ee8ecc2c2b4-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.821072 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53665226-059a-426d-9d71-9ee8ecc2c2b4-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:04 crc kubenswrapper[4853]: I1209 17:00:04.821083 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snbp9\" (UniqueName: \"kubernetes.io/projected/53665226-059a-426d-9d71-9ee8ecc2c2b4-kube-api-access-snbp9\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.122632 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" podUID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" containerName="oauth-openshift" containerID="cri-o://a5a4bd22eff497fbd250f3bbce8ec254c64891a45b192d774c28e3f082c7d101" gracePeriod=15 Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.390250 4853 generic.go:334] "Generic (PLEG): container finished" podID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" containerID="a5a4bd22eff497fbd250f3bbce8ec254c64891a45b192d774c28e3f082c7d101" exitCode=0 Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.390301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" event={"ID":"7f4d7735-71ec-48b9-b4dc-017a983a2e2c","Type":"ContainerDied","Data":"a5a4bd22eff497fbd250f3bbce8ec254c64891a45b192d774c28e3f082c7d101"} Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.392980 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" event={"ID":"53665226-059a-426d-9d71-9ee8ecc2c2b4","Type":"ContainerDied","Data":"8b9a9a6cabca02a0fa0b79ca471a434e100ebbb4076f9ffe44848da0ac669523"} Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.393010 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9a9a6cabca02a0fa0b79ca471a434e100ebbb4076f9ffe44848da0ac669523" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.393068 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.573161 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" path="/var/lib/kubelet/pods/c4e52886-10d9-49ec-8160-091f821e2cda/volumes" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.584174 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637452 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-ocp-branding-template\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637494 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-idp-0-file-data\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637514 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-cliconfig\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637557 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-trusted-ca-bundle\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637620 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-login\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637645 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-provider-selection\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637669 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-policies\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637694 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-service-ca\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637727 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-serving-cert\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: 
\"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637769 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-dir\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637786 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-router-certs\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637810 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-error\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637830 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-session\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.637856 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b62fx\" (UniqueName: \"kubernetes.io/projected/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-kube-api-access-b62fx\") pod \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\" (UID: \"7f4d7735-71ec-48b9-b4dc-017a983a2e2c\") " Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.639067 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.639162 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.639854 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.641367 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.642162 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.642547 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.644392 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.646662 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.647773 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.652390 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-kube-api-access-b62fx" (OuterVolumeSpecName: "kube-api-access-b62fx") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "kube-api-access-b62fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.652670 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.657952 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.660936 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.662294 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7f4d7735-71ec-48b9-b4dc-017a983a2e2c" (UID: "7f4d7735-71ec-48b9-b4dc-017a983a2e2c"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739101 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b62fx\" (UniqueName: \"kubernetes.io/projected/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-kube-api-access-b62fx\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739142 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739159 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739172 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739185 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739198 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739215 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739230 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739241 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739254 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739266 4853 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739282 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739294 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:05 crc kubenswrapper[4853]: I1209 17:00:05.739305 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7f4d7735-71ec-48b9-b4dc-017a983a2e2c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:06 crc kubenswrapper[4853]: I1209 17:00:06.400120 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" event={"ID":"7f4d7735-71ec-48b9-b4dc-017a983a2e2c","Type":"ContainerDied","Data":"d30cc3e7ba72447a87486e701f98c9fccc33e94f665487c6b786e826ba7fcfbb"} Dec 09 17:00:06 crc kubenswrapper[4853]: I1209 17:00:06.400207 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5vb7d" Dec 09 17:00:06 crc kubenswrapper[4853]: I1209 17:00:06.400438 4853 scope.go:117] "RemoveContainer" containerID="a5a4bd22eff497fbd250f3bbce8ec254c64891a45b192d774c28e3f082c7d101" Dec 09 17:00:06 crc kubenswrapper[4853]: I1209 17:00:06.446898 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5vb7d"] Dec 09 17:00:06 crc kubenswrapper[4853]: I1209 17:00:06.450726 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5vb7d"] Dec 09 17:00:07 crc kubenswrapper[4853]: I1209 17:00:07.576022 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" path="/var/lib/kubelet/pods/7f4d7735-71ec-48b9-b4dc-017a983a2e2c/volumes" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.124258 4853 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.124900 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af" gracePeriod=15 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.124972 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b" gracePeriod=15 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.125077 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2" gracePeriod=15 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.125135 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300" gracePeriod=15 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.125149 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48" gracePeriod=15 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126115 4853 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126324 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="extract-content" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126523 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="extract-content" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126537 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="extract-content" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126544 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="extract-content" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126553 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53665226-059a-426d-9d71-9ee8ecc2c2b4" containerName="collect-profiles" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126559 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="53665226-059a-426d-9d71-9ee8ecc2c2b4" containerName="collect-profiles" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126566 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="extract-utilities" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126572 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="extract-utilities" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126582 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="extract-content" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126588 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="extract-content" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126613 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="extract-utilities" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126620 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="extract-utilities" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126628 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126634 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="registry-server" Dec 09 17:00:12 crc 
kubenswrapper[4853]: E1209 17:00:12.126641 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126648 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126655 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" containerName="oauth-openshift" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126662 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" containerName="oauth-openshift" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126671 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126676 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126685 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126691 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126697 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126702 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126710 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126716 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126724 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="extract-utilities" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126730 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="extract-utilities" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126738 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126744 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126750 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126756 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126764 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126770 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.126778 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126785 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126875 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126885 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126895 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4d7735-71ec-48b9-b4dc-017a983a2e2c" containerName="oauth-openshift" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126905 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126911 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126928 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="539542f2-16c1-4479-814c-f39a282d6726" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126936 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126943 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126951 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="058c275f-2ca6-45c0-9dd1-bcd8861c2fb5" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126958 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e52886-10d9-49ec-8160-091f821e2cda" containerName="registry-server" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.126966 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="53665226-059a-426d-9d71-9ee8ecc2c2b4" containerName="collect-profiles" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.128911 4853 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.130268 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.134369 4853 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.170506 4853 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.213922 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.213977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.214008 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.214043 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.214071 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.214094 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.214118 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 
17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.214174 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.315844 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.315894 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.315936 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.315966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.315986 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316006 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316029 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316045 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc 
kubenswrapper[4853]: I1209 17:00:12.316112 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316157 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316125 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316178 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316175 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316163 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316019 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.316201 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.448836 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.450157 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
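Every volume in this block is a kubernetes.io/host-path volume, which is why VerifyControllerAttachedVolume, "MountVolume started", and "MountVolume.SetUp succeeded" all land within the same few milliseconds: for hostPath there is nothing to attach or mount, only a node-local path to hand to the container, so the static pods can come up with no API server or storage driver involved. The UniqueName prefix (kubernetes.io/host-path/<podUID>-<name>) is what keeps identically named volumes like "resource-dir" distinct across the two pods. In pod-spec terms, one of these volumes looks roughly like the sketch below; only the volume name comes from the log, and the host path is an assumption for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative hostPath volume in the shape the reconciler reports above,
	// e.g. "cert-dir" for kube-apiserver-crc. The path itself is assumed.
	dirType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "cert-dir",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/etc/kubernetes/static-pod-certs", // assumed path
				Type: &dirType,
			},
		},
	}
	mount := corev1.VolumeMount{Name: "cert-dir", MountPath: "/etc/kubernetes/static-pod-certs"}
	fmt.Println(vol.Name, mount.MountPath)
}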
Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.451853 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300" exitCode=0 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.451886 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b" exitCode=0 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.451894 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2" exitCode=0 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.451902 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48" exitCode=2 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.451938 4853 scope.go:117] "RemoveContainer" containerID="2c0e8dbf7789360fce037c8a05eec97b1766db7260926923a3d07cb781f44cfd" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.453701 4853 generic.go:334] "Generic (PLEG): container finished" podID="61180e38-0858-499f-b0b1-1209a0a19ec2" containerID="09f6bcd37c9cce87fbbe0320b23db6e131e809f0846f3064de604c1f4b5e27a4" exitCode=0 Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.453742 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"61180e38-0858-499f-b0b1-1209a0a19ec2","Type":"ContainerDied","Data":"09f6bcd37c9cce87fbbe0320b23db6e131e809f0846f3064de604c1f4b5e27a4"} Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.454655 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:12 crc kubenswrapper[4853]: I1209 17:00:12.472087 4853 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:12 crc kubenswrapper[4853]: E1209 17:00:12.496296 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f9aa504f18620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 17:00:12.495742496 +0000 UTC m=+239.430481678,LastTimestamp:2025-12-09 17:00:12.495742496 +0000 UTC m=+239.430481678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.459650 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b"} Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.460032 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"112558fa3b6b228fcfc0695d5c9dd02134c3eed41bce573fa639e8702cbd8be4"} Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.460720 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:13 crc kubenswrapper[4853]: E1209 17:00:13.460863 4853 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.463123 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.572088 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.693128 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.693913 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.829865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-kubelet-dir\") pod \"61180e38-0858-499f-b0b1-1209a0a19ec2\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.829927 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-var-lock\") pod \"61180e38-0858-499f-b0b1-1209a0a19ec2\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.829960 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61180e38-0858-499f-b0b1-1209a0a19ec2-kube-api-access\") pod \"61180e38-0858-499f-b0b1-1209a0a19ec2\" (UID: \"61180e38-0858-499f-b0b1-1209a0a19ec2\") " Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.829980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "61180e38-0858-499f-b0b1-1209a0a19ec2" (UID: "61180e38-0858-499f-b0b1-1209a0a19ec2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.830092 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-var-lock" (OuterVolumeSpecName: "var-lock") pod "61180e38-0858-499f-b0b1-1209a0a19ec2" (UID: "61180e38-0858-499f-b0b1-1209a0a19ec2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.830346 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.830372 4853 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/61180e38-0858-499f-b0b1-1209a0a19ec2-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.835188 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61180e38-0858-499f-b0b1-1209a0a19ec2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "61180e38-0858-499f-b0b1-1209a0a19ec2" (UID: "61180e38-0858-499f-b0b1-1209a0a19ec2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:00:13 crc kubenswrapper[4853]: I1209 17:00:13.931460 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/61180e38-0858-499f-b0b1-1209a0a19ec2-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.470341 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"61180e38-0858-499f-b0b1-1209a0a19ec2","Type":"ContainerDied","Data":"f42c3227eb1762a72553e5e5ff443589db3966afebabdebd38cd704f00628582"} Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.471473 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42c3227eb1762a72553e5e5ff443589db3966afebabdebd38cd704f00628582" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.470358 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.473356 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.474211 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af" exitCode=0 Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.495866 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.498867 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.500374 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.501208 4853 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.501585 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.641759 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.641870 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.641916 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.641977 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.642007 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.642070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.642442 4853 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.642463 4853 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:14 crc kubenswrapper[4853]: I1209 17:00:14.642474 4853 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.487167 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.488066 4853 scope.go:117] "RemoveContainer" containerID="3e4c3154ac835f98fc1077c66452383a797f86f2d9290beee93a64a1bc9df300" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.488465 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.503429 4853 scope.go:117] "RemoveContainer" containerID="4c32f454e6508b997a448526c0eb035f60501d3b73e45a4866f38bd0a324eb2b" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.510343 4853 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.511509 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.520882 4853 scope.go:117] "RemoveContainer" containerID="f957f78a4d08e10d4b18d91cd4a9ad2ef52e4e505c7ebf77e342662f40622fc2" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.534941 4853 scope.go:117] "RemoveContainer" containerID="a244d8a1e4dd7dfb4366ade2268f5be1408256c8421c68ed7e31b0f4ea1bad48" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.548838 4853 scope.go:117] "RemoveContainer" containerID="78934b917df1d7e795e4463a07c8e25bb3181934d365f7545a6a773fe99f65af" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.566518 4853 scope.go:117] "RemoveContainer" containerID="17637237c6ca16f8f2d3033a797960aaba168f28fedd03019709423da434cc89" Dec 09 17:00:15 crc kubenswrapper[4853]: I1209 17:00:15.573532 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.653882 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f9aa504f18620 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 17:00:12.495742496 +0000 UTC m=+239.430481678,LastTimestamp:2025-12-09 17:00:12.495742496 +0000 UTC m=+239.430481678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.725450 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.725778 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.726265 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.726845 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.727354 4853 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:18 crc kubenswrapper[4853]: I1209 17:00:18.727397 4853 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.727769 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Dec 09 17:00:18 crc kubenswrapper[4853]: E1209 17:00:18.929501 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Dec 09 17:00:19 crc kubenswrapper[4853]: E1209 17:00:19.331414 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Dec 09 17:00:20 crc kubenswrapper[4853]: E1209 17:00:20.133257 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Dec 09 17:00:21 crc kubenswrapper[4853]: E1209 17:00:21.733838 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Dec 09 17:00:22 crc kubenswrapper[4853]: I1209 17:00:22.566742 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:22 crc kubenswrapper[4853]: I1209 17:00:22.567728 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Dec 09 17:00:22 crc kubenswrapper[4853]: I1209 17:00:22.583426 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0" Dec 09 17:00:22 crc kubenswrapper[4853]: I1209 17:00:22.583452 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0" Dec 09 17:00:22 crc kubenswrapper[4853]: E1209 17:00:22.583692 4853 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:22 crc kubenswrapper[4853]: I1209 17:00:22.584170 4853 util.go:30] "No sandbox for pod can be found. 
Dec 09 17:00:22 crc kubenswrapper[4853]: I1209 17:00:22.584170 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.534279 4853 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e4e024057a1c4e4e6630084e46a4f592cef8c3ad8d86e07047c4fd8e728abac9" exitCode=0
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.534388 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e4e024057a1c4e4e6630084e46a4f592cef8c3ad8d86e07047c4fd8e728abac9"}
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.534513 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"475cc8894e08a70d6575fdf2cf7235db496f51efbccd2c1dda2f30e4ae38584a"}
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.534754 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.534766 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:23 crc kubenswrapper[4853]: E1209 17:00:23.535218 4853 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.535291 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused"
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.582321 4853 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused"
Dec 09 17:00:23 crc kubenswrapper[4853]: I1209 17:00:23.582708 4853 status_manager.go:851] "Failed to get status for pod" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.36:6443: connect: connection refused"
Dec 09 17:00:24 crc kubenswrapper[4853]: I1209 17:00:24.545434 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8096c4b2189e30677791f68b99797f50239e4e1eeb20f93165f032925f23bb0a"}
Dec 09 17:00:24 crc kubenswrapper[4853]: I1209 17:00:24.546443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fd72a08df804a8f3be72c090baeaa89873342346dc24758a2a950960b273b262"}
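[editor's note] The "Deleting a mirror pod" / "Failed deleting a mirror pod" pairs above are plain DELETE calls that keep failing while the very apiserver the static pod fronts is still coming up; the kubelet simply retries on each sync. A hedged client-go sketch of the call shape (not the kubelet's mirror_client; error handling simplified):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // deleteMirrorPod issues the same DELETE the log shows failing with
    // "connection refused"; the caller retries on the next sync loop.
    func deleteMirrorPod(clientset kubernetes.Interface, ns, name string) error {
        return clientset.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{})
    }

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)
        if err := deleteMirrorPod(clientset, "openshift-kube-apiserver", "kube-apiserver-crc"); err != nil {
            fmt.Println("Failed deleting a mirror pod:", err)
        }
    }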
Dec 09 17:00:24 crc kubenswrapper[4853]: I1209 17:00:24.546534 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d30b3f7bf722c0b85a2f5e65ccf9b7cb9ae462affcb2d5e64a386741de854791"}
Dec 09 17:00:24 crc kubenswrapper[4853]: I1209 17:00:24.546697 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58c4c1543aff72737711a12843075ddae0eec8ca933b811433ad06f1c96b4139"}
Dec 09 17:00:25 crc kubenswrapper[4853]: I1209 17:00:25.559737 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ab7ffac65116b7f01522cdf97a0e7a1784d39f1a0cc37af363e8dd1d2aadcd4"}
Dec 09 17:00:25 crc kubenswrapper[4853]: I1209 17:00:25.559945 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:25 crc kubenswrapper[4853]: I1209 17:00:25.560067 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:25 crc kubenswrapper[4853]: I1209 17:00:25.560091 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:27 crc kubenswrapper[4853]: I1209 17:00:27.578299 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Dec 09 17:00:27 crc kubenswrapper[4853]: I1209 17:00:27.578751 4853 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c" exitCode=1
Dec 09 17:00:27 crc kubenswrapper[4853]: I1209 17:00:27.579488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c"}
Dec 09 17:00:27 crc kubenswrapper[4853]: I1209 17:00:27.580161 4853 scope.go:117] "RemoveContainer" containerID="69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c"
Dec 09 17:00:27 crc kubenswrapper[4853]: I1209 17:00:27.584306 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:27 crc kubenswrapper[4853]: I1209 17:00:27.584371 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:27 crc kubenswrapper[4853]: I1209 17:00:27.592284 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:28 crc kubenswrapper[4853]: I1209 17:00:28.590376 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Dec 09 17:00:28 crc kubenswrapper[4853]: I1209 17:00:28.590715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1876e9489fe62dcdddb5169877a4721272cddd06da316af6a508e6d9036b0b15"}
Dec 09 17:00:28 crc kubenswrapper[4853]: I1209 17:00:28.840939 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 17:00:29 crc kubenswrapper[4853]: I1209 17:00:29.796995 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 17:00:29 crc kubenswrapper[4853]: I1209 17:00:29.797366 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Dec 09 17:00:29 crc kubenswrapper[4853]: I1209 17:00:29.797652 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Dec 09 17:00:30 crc kubenswrapper[4853]: I1209 17:00:30.571516 4853 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:30 crc kubenswrapper[4853]: I1209 17:00:30.598311 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc99e099-5890-4938-9a22-2d2df9bb57e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T17:00:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T17:00:23Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T17:00:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T17:00:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c4c1543aff72737711a12843075ddae0eec8ca933b811433ad06f1c96b4139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T17:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd72a08df804a8f3be72c090baeaa89873342346dc24758a2a950960b273b262\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30b3f7bf722c0b85a2f5e65ccf9b7cb9ae462affcb2d5e64a386741de854791\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ab7ffac65116b7f01522cdf97a0e7a1784d39f1a0cc37af363e8dd1d2aadcd4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T17:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8096c4b2189e30677791f68b99797f50239e4e1eeb20f93165f032925f23bb0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T17:00:24Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e024057a1c4e4e6630084e46a4f592cef8c3ad8d86e07047c4fd8e728abac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e024057a1c4e4e6630084e46a4f592cef8c3ad8d86e07047c4fd8e728abac9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T17:00:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T17:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"dc99e099-5890-4938-9a22-2d2df9bb57e0\": field is immutable"
Dec 09 17:00:30 crc kubenswrapper[4853]: I1209 17:00:30.602138 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:30 crc kubenswrapper[4853]: I1209 17:00:30.602199 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:30 crc kubenswrapper[4853]: I1209 17:00:30.607383 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 17:00:30 crc kubenswrapper[4853]: I1209 17:00:30.636088 4853 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="71262499-eef7-4979-99aa-8b880c716bc8"
Dec 09 17:00:31 crc kubenswrapper[4853]: I1209 17:00:31.606924 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:31 crc kubenswrapper[4853]: I1209 17:00:31.607238 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:31 crc kubenswrapper[4853]: I1209 17:00:31.609841 4853 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="71262499-eef7-4979-99aa-8b880c716bc8"
Dec 09 17:00:37 crc kubenswrapper[4853]: I1209 17:00:37.458494 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
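[editor's note] The "Failed to update status for pod ... field is immutable" entry above is the status manager patching with the stale UID (dc99e099-...) after the mirror pod was deleted and recreated under a new UID; the apiserver rejects any patch that tries to change metadata.uid. A hedged client-go sketch of that rejected call, with the patch body reduced to the part that matters (not the kubelet's status_manager source):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // The patch pins metadata.uid to the pod the kubelet last saw.
        // If the live object was recreated with a new UID, the apiserver
        // answers "metadata.uid ... field is immutable", exactly as in the
        // log, and the stale update is skipped.
        patch := []byte(`{"metadata":{"uid":"dc99e099-5890-4938-9a22-2d2df9bb57e0"},"status":{}}`)
        _, err = clientset.CoreV1().Pods("openshift-kube-apiserver").Patch(
            context.TODO(), "kube-apiserver-crc",
            types.StrategicMergePatchType, patch,
            metav1.PatchOptions{}, "status")
        if err != nil {
            fmt.Println("Failed to update status for pod:", err)
        }
    }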
Dec 09 17:00:37 crc kubenswrapper[4853]: I1209 17:00:37.719794 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 09 17:00:38 crc kubenswrapper[4853]: I1209 17:00:38.097428 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 09 17:00:38 crc kubenswrapper[4853]: I1209 17:00:38.353847 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 09 17:00:38 crc kubenswrapper[4853]: I1209 17:00:38.445987 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 09 17:00:38 crc kubenswrapper[4853]: I1209 17:00:38.550659 4853 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 09 17:00:38 crc kubenswrapper[4853]: I1209 17:00:38.846818 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 09 17:00:38 crc kubenswrapper[4853]: I1209 17:00:38.895170 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 09 17:00:39 crc kubenswrapper[4853]: I1209 17:00:39.054118 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 09 17:00:39 crc kubenswrapper[4853]: I1209 17:00:39.055025 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 09 17:00:39 crc kubenswrapper[4853]: I1209 17:00:39.201086 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 09 17:00:39 crc kubenswrapper[4853]: I1209 17:00:39.356180 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 09 17:00:39 crc kubenswrapper[4853]: I1209 17:00:39.591214 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 09 17:00:39 crc kubenswrapper[4853]: I1209 17:00:39.797095 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Dec 09 17:00:39 crc kubenswrapper[4853]: I1209 17:00:39.797184 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Dec 09 17:00:40 crc kubenswrapper[4853]: I1209 17:00:40.045166 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 09 17:00:40 crc kubenswrapper[4853]: I1209 17:00:40.571928 4853 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 09 17:00:40 crc kubenswrapper[4853]: I1209 17:00:40.606966 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 09 17:00:40 crc kubenswrapper[4853]: I1209 17:00:40.645446 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 09 17:00:40 crc kubenswrapper[4853]: I1209 17:00:40.965996 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 09 17:00:41 crc kubenswrapper[4853]: I1209 17:00:41.226171 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 09 17:00:41 crc kubenswrapper[4853]: I1209 17:00:41.235887 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 09 17:00:41 crc kubenswrapper[4853]: I1209 17:00:41.388116 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 09 17:00:41 crc kubenswrapper[4853]: I1209 17:00:41.879256 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 09 17:00:41 crc kubenswrapper[4853]: I1209 17:00:41.955875 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 09 17:00:41 crc kubenswrapper[4853]: I1209 17:00:41.978477 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.693724 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.710302 4853 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.717967 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.718059 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l"]
Dec 09 17:00:42 crc kubenswrapper[4853]: E1209 17:00:42.718401 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" containerName="installer"
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.718440 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" containerName="installer"
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.718446 4853 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.718475 4853 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc99e099-5890-4938-9a22-2d2df9bb57e0"
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.718694 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="61180e38-0858-499f-b0b1-1209a0a19ec2" containerName="installer"
Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.719018 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
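[editor's note] Each "Caches populated" line above is a client-go reflector finishing its initial LIST for one ConfigMap or Secret source before it switches to WATCH. A generic sketch of that pattern with a shared informer factory and WaitForCacheSync (standard client-go usage, not the kubelet's internal wiring; the namespace is taken from the log as an example):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Each Informer() spins up a reflector that LISTs then WATCHes;
        // once the first LIST lands, its cache counts as populated.
        factory := informers.NewSharedInformerFactoryWithOptions(
            clientset, 10*time.Minute,
            informers.WithNamespace("openshift-authentication"))
        cmInformer := factory.Core().V1().ConfigMaps().Informer()
        secretInformer := factory.Core().V1().Secrets().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        if !cache.WaitForCacheSync(stop, cmInformer.HasSynced, secretInformer.HasSynced) {
            panic("caches never populated")
        }
        fmt.Println("Caches populated")
    }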
"No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.721315 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.723636 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.723883 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.724207 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.724332 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.724430 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.724693 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.724342 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.726119 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.726204 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.727713 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.731300 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.732040 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.741970 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.744330 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.751946 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.762354 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.762331816 podStartE2EDuration="12.762331816s" podCreationTimestamp="2025-12-09 17:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:00:42.75569023 +0000 UTC m=+269.690429422" watchObservedRunningTime="2025-12-09 17:00:42.762331816 +0000 UTC m=+269.697070998" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.819164 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.890658 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-login\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.891290 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-error\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.891420 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.891542 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.891739 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-session\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.891906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.892049 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6l7f\" (UniqueName: \"kubernetes.io/projected/bd265ad4-2b09-4071-a584-ccec125b7afd-kube-api-access-s6l7f\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " 
pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.892216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd265ad4-2b09-4071-a584-ccec125b7afd-audit-dir\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.892418 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.892668 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.892829 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.893023 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-audit-policies\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.893181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.893322 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.947449 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.948315 4853 reflector.go:368] Caches populated for 
*v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.994828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.995300 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.995558 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-audit-policies\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.995870 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.996106 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.996719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.996879 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-audit-policies\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.996916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " 
pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.997484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.998689 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-login\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.999400 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-error\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:42 crc kubenswrapper[4853]: I1209 17:00:42.999716 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.000801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.001072 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-session\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.001313 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.005723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-login\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " 
pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.005730 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6l7f\" (UniqueName: \"kubernetes.io/projected/bd265ad4-2b09-4071-a584-ccec125b7afd-kube-api-access-s6l7f\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.005954 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.003994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.006058 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd265ad4-2b09-4071-a584-ccec125b7afd-audit-dir\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.006171 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd265ad4-2b09-4071-a584-ccec125b7afd-audit-dir\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.006638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.007378 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.008369 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc 
kubenswrapper[4853]: I1209 17:00:43.009376 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.009421 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-user-template-error\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.010562 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd265ad4-2b09-4071-a584-ccec125b7afd-v4-0-config-system-session\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.036279 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6l7f\" (UniqueName: \"kubernetes.io/projected/bd265ad4-2b09-4071-a584-ccec125b7afd-kube-api-access-s6l7f\") pod \"oauth-openshift-6fdcc7ff8c-2rg6l\" (UID: \"bd265ad4-2b09-4071-a584-ccec125b7afd\") " pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.043366 4853 util.go:30] "No sandbox for pod can be found. 
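[editor's note] Every volume above moves through the same two-step trace: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. A toy stdlib model of that desired-state / actual-state reconcile loop, purely illustrative (the real reconciler lives in the kubelet's volumemanager; names below are made up, and the "..." in the unique names is a deliberate elision):

    package main

    import "fmt"

    // toy model: walk the desired set, verify attachment, then mount
    // anything not yet recorded in the actual state of the world.
    type volume struct{ name, uniqueName string }

    func reconcile(desired []volume, mounted map[string]bool) {
        for _, v := range desired {
            fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q\n", v.name)
            if mounted[v.uniqueName] {
                continue // already in the actual state of the world
            }
            fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v.name)
            // ... the volume plugin's SetUp() would run here ...
            mounted[v.uniqueName] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
        }
    }

    func main() {
        desired := []volume{
            {"v4-0-config-system-session", "kubernetes.io/secret/bd265ad4-...-v4-0-config-system-session"},
            {"audit-dir", "kubernetes.io/host-path/bd265ad4-...-audit-dir"},
        }
        reconcile(desired, map[string]bool{})
    }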
Need to start a new one" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.626772 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 17:00:43 crc kubenswrapper[4853]: I1209 17:00:43.719154 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.194125 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.223676 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.260481 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.349692 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.461767 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.557689 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.577337 4853 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.852865 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.907951 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 17:00:44 crc kubenswrapper[4853]: I1209 17:00:44.938336 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.217557 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.257237 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.318647 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.422440 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.550037 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.551488 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.612983 4853 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.745250 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.859154 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.900685 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 17:00:45 crc kubenswrapper[4853]: I1209 17:00:45.947237 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.186967 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.269443 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.371693 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.601878 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.608355 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.611500 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.692484 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.753242 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 17:00:46 crc kubenswrapper[4853]: I1209 17:00:46.988197 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.063353 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.124419 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.147017 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.405544 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.454378 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.473837 4853 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.522638 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.583635 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.608415 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.622770 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.899621 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 17:00:47 crc kubenswrapper[4853]: I1209 17:00:47.983390 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.018779 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.033640 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.061456 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.181922 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.245415 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.264309 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.264733 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.424007 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.442531 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.450744 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.452645 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.543387 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.544685 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 17:00:48 crc 
kubenswrapper[4853]: I1209 17:00:48.628498 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.687148 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.761238 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.891860 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 17:00:48 crc kubenswrapper[4853]: I1209 17:00:48.896122 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.058019 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.183998 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.202161 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.237763 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.259630 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.286583 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.291842 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.296397 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.380082 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.402212 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.447680 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.497515 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.531907 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.611406 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 17:00:49 crc 
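[editor's note] The reflector.go:368 "Caches populated" entries above and below come from client-go Reflectors: the kubelet starts one per ConfigMap/Secret referenced by a pod it runs, does an initial LIST into a local store, then WATCHes for changes, so volume and env-var contents can be resolved without re-querying the API server. A minimal, self-contained sketch of that mechanism (illustrative only, not the kubelet's actual wiring; the kubeconfig path and namespace are assumptions):

```go
// Illustrative sketch: a client-go Reflector that LISTs+WATCHes ConfigMaps
// in one namespace into a local store. Completing the initial list is what
// produces the "Caches populated for *v1.ConfigMap from ..." lines above.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One reflector per (resource, namespace), mirroring how the kubelet
	// scopes a cache per pod-referenced ConfigMap/Secret.
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "configmaps", "openshift-dns", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.ConfigMap{}, store, 0)

	stop := make(chan struct{})
	go r.Run(stop) // list, populate store, then watch

	time.Sleep(2 * time.Second) // crude wait for the initial list
	fmt.Printf("cache holds %d ConfigMaps\n", len(store.List()))
	close(stop)
}
```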
kubenswrapper[4853]: I1209 17:00:49.767508 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.775341 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.778130 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.796817 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.796884 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.796953 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.797515 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"1876e9489fe62dcdddb5169877a4721272cddd06da316af6a508e6d9036b0b15"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.797688 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://1876e9489fe62dcdddb5169877a4721272cddd06da316af6a508e6d9036b0b15" gracePeriod=30
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.860789 4853 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.909337 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.963810 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.971563 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.982429 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 09 17:00:49 crc kubenswrapper[4853]: I1209 17:00:49.989386 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.033985 4853 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.040495 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.187032 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.336008 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.407554 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.466702 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.534570 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.593671 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.683054 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.782029 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.839512 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.887664 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.890666 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.940956 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 17:00:50 crc kubenswrapper[4853]: I1209 17:00:50.991492 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.086344 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.154105 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.179169 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.268076 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 
17:00:51.315818 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.429279 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.430417 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.472259 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.494232 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.574790 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.593260 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.608029 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.777062 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.796134 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.814074 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.852519 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.883218 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.919859 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 17:00:51 crc kubenswrapper[4853]: I1209 17:00:51.953535 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l"] Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.055881 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.115898 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.116424 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.146146 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 
17:00:52.162304 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.213978 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.371450 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.432571 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l"] Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.484476 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.533756 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.544496 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.640230 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.643467 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.661523 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.677125 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.685073 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.751743 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" event={"ID":"bd265ad4-2b09-4071-a584-ccec125b7afd","Type":"ContainerStarted","Data":"3ca4c3a8e16d5cd0fca9d9bea69f15111d32ca284d444ac6b929d1f406f5dfd0"} Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.751842 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" event={"ID":"bd265ad4-2b09-4071-a584-ccec125b7afd","Type":"ContainerStarted","Data":"4f5e01dc86c6a9134a99c678c911d182d68218765cc87abd480813d72f1ea0a3"} Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.752060 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.755193 4853 patch_prober.go:28] interesting pod/oauth-openshift-6fdcc7ff8c-2rg6l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" start-of-body= Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.755300 4853 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" podUID="bd265ad4-2b09-4071-a584-ccec125b7afd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.771908 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" podStartSLOduration=72.771894161 podStartE2EDuration="1m12.771894161s" podCreationTimestamp="2025-12-09 16:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:00:52.770977234 +0000 UTC m=+279.705716426" watchObservedRunningTime="2025-12-09 17:00:52.771894161 +0000 UTC m=+279.706633343" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.816520 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.863688 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.960856 4853 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 17:00:52 crc kubenswrapper[4853]: I1209 17:00:52.961069 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b" gracePeriod=5 Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.009409 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.058437 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fdcc7ff8c-2rg6l" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.067674 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.090496 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.100875 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.112520 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.125792 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.212968 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.318650 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 17:00:53 crc 
kubenswrapper[4853]: I1209 17:00:53.505857 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.583036 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.583684 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.615495 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.617041 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.795737 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.809526 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.852819 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.872886 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.881580 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.906038 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.907701 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 17:00:53 crc kubenswrapper[4853]: I1209 17:00:53.934837 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.173517 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.192797 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.236180 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.255744 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.271651 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.392693 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 17:00:54 crc kubenswrapper[4853]: 
I1209 17:00:54.420094 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.527753 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.542001 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.576517 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.629920 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.639919 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.648412 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 17:00:54 crc kubenswrapper[4853]: I1209 17:00:54.974871 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.014407 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.218934 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.250971 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.268497 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.289410 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.331793 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.333202 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.506466 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.594047 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.640186 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.658289 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" 
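[editor's note] Many of the Secrets being cached above are "*-dockercfg-*" secrets of type kubernetes.io/dockercfg: a single ".dockercfg" key holding JSON registry credentials for a service account, which the kubelet uses to pull images. A sketch of decoding that payload (reading from a local file is purely illustrative; in-cluster the bytes come from the Secret's ".dockercfg" key):

```go
// Decodes a kubernetes.io/dockercfg payload like the ones held by the
// "*-dockercfg-*" Secrets above: a map of registry host -> credentials.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type dockercfg map[string]struct {
	Username string `json:"username"`
	Password string `json:"password"`
	Email    string `json:"email"`
	Auth     string `json:"auth"` // base64("user:password")
}

func main() {
	raw, err := os.ReadFile("dockercfg.json") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	var cfg dockercfg
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	for registry, cred := range cfg {
		fmt.Printf("%s -> user %s\n", registry, cred.Username)
	}
}
```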
Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.715997 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 17:00:55 crc kubenswrapper[4853]: I1209 17:00:55.895727 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.002368 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.116186 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.235309 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.258618 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.268712 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.288639 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.307172 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.328169 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.343912 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.396815 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.445094 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.671037 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.854050 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.900643 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 17:00:56 crc kubenswrapper[4853]: I1209 17:00:56.962476 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 17:00:57 crc kubenswrapper[4853]: I1209 17:00:57.216723 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 17:00:57 crc kubenswrapper[4853]: I1209 17:00:57.255058 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 17:00:57 crc kubenswrapper[4853]: I1209 17:00:57.483008 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 17:00:57 crc kubenswrapper[4853]: I1209 17:00:57.946325 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.119773 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.541177 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.541514 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707724 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707792 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707825 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707826 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707912 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707938 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707907 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.707942 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.708075 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.708279 4853 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.708295 4853 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.708306 4853 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.708315 4853 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.720847 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.786151 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.786214 4853 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b" exitCode=137 Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.786261 4853 scope.go:117] "RemoveContainer" containerID="c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.786319 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.804650 4853 scope.go:117] "RemoveContainer" containerID="c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b" Dec 09 17:00:58 crc kubenswrapper[4853]: E1209 17:00:58.805133 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b\": container with ID starting with c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b not found: ID does not exist" containerID="c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.805199 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b"} err="failed to get container status \"c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b\": rpc error: code = NotFound desc = could not find container \"c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b\": container with ID starting with c25fbcf43af64863d6783b25aa7c54880964090636dc49da1689ca4b39e82f2b not found: ID does not exist" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.809720 4853 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 17:00:58 crc kubenswrapper[4853]: I1209 17:00:58.861375 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 17:00:59 crc kubenswrapper[4853]: I1209 17:00:59.307609 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 17:00:59 crc kubenswrapper[4853]: I1209 17:00:59.396967 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 17:00:59 crc kubenswrapper[4853]: I1209 17:00:59.574073 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 09 17:01:00 crc kubenswrapper[4853]: I1209 17:01:00.513621 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 17:01:13 crc kubenswrapper[4853]: I1209 17:01:13.424255 4853 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 09 17:01:18 crc kubenswrapper[4853]: I1209 17:01:18.632368 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvjb6"] Dec 09 17:01:18 crc kubenswrapper[4853]: I1209 17:01:18.633068 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerName="controller-manager" containerID="cri-o://00ee9d6714d3d2049cf11288d32e949f6a6a5ee09d25e202df592da63210ccfa" gracePeriod=30 Dec 09 17:01:18 crc kubenswrapper[4853]: I1209 17:01:18.666371 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qvjb6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 09 17:01:18 crc kubenswrapper[4853]: I1209 17:01:18.666448 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 09 17:01:18 crc kubenswrapper[4853]: I1209 17:01:18.756083 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"] Dec 09 17:01:18 crc kubenswrapper[4853]: I1209 17:01:18.756277 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerName="route-controller-manager" containerID="cri-o://e80ce451813c3b13a5092780abffc1f0b5e3c40bc7fdf2bd28670d973bd25684" gracePeriod=30 Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.923043 4853 generic.go:334] "Generic (PLEG): container finished" podID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerID="00ee9d6714d3d2049cf11288d32e949f6a6a5ee09d25e202df592da63210ccfa" exitCode=0 Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.923155 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" event={"ID":"5ac21b02-cdf3-4f92-8f7b-898015277e7a","Type":"ContainerDied","Data":"00ee9d6714d3d2049cf11288d32e949f6a6a5ee09d25e202df592da63210ccfa"} Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.925558 4853 generic.go:334] "Generic (PLEG): container finished" podID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerID="e80ce451813c3b13a5092780abffc1f0b5e3c40bc7fdf2bd28670d973bd25684" exitCode=0 Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.925620 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" event={"ID":"2bd00d6d-d30c-49c5-aa61-3392f0b29a86","Type":"ContainerDied","Data":"e80ce451813c3b13a5092780abffc1f0b5e3c40bc7fdf2bd28670d973bd25684"} Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.927507 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.929228 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.929277 4853 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1876e9489fe62dcdddb5169877a4721272cddd06da316af6a508e6d9036b0b15" exitCode=137 Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.929308 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1876e9489fe62dcdddb5169877a4721272cddd06da316af6a508e6d9036b0b15"} Dec 09 17:01:20 crc kubenswrapper[4853]: I1209 17:01:20.929343 4853 scope.go:117] "RemoveContainer" 
containerID="69264e0d4cbf38d213bf8ac947ef9bb125181f046af9e98d91fbf953b083fe7c" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.661614 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.688488 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-ccd675599-9prhn"] Dec 09 17:01:21 crc kubenswrapper[4853]: E1209 17:01:21.688814 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerName="controller-manager" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.688829 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerName="controller-manager" Dec 09 17:01:21 crc kubenswrapper[4853]: E1209 17:01:21.688837 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.688847 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.688953 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" containerName="controller-manager" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.688969 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.689383 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.697590 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccd675599-9prhn"] Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.759098 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.827527 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac21b02-cdf3-4f92-8f7b-898015277e7a-serving-cert\") pod \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.827680 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzrb7\" (UniqueName: \"kubernetes.io/projected/5ac21b02-cdf3-4f92-8f7b-898015277e7a-kube-api-access-zzrb7\") pod \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.827730 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-config\") pod \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.827762 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-client-ca\") pod \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.827821 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-proxy-ca-bundles\") pod \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\" (UID: \"5ac21b02-cdf3-4f92-8f7b-898015277e7a\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828044 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qxl2\" (UniqueName: \"kubernetes.io/projected/84c31d58-db4e-4e85-875c-e43304ef3aa0-kube-api-access-7qxl2\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828107 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-client-ca\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828136 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-config\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828162 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-proxy-ca-bundles\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " 
pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828197 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c31d58-db4e-4e85-875c-e43304ef3aa0-serving-cert\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828499 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ac21b02-cdf3-4f92-8f7b-898015277e7a" (UID: "5ac21b02-cdf3-4f92-8f7b-898015277e7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828521 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5ac21b02-cdf3-4f92-8f7b-898015277e7a" (UID: "5ac21b02-cdf3-4f92-8f7b-898015277e7a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.828691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-config" (OuterVolumeSpecName: "config") pod "5ac21b02-cdf3-4f92-8f7b-898015277e7a" (UID: "5ac21b02-cdf3-4f92-8f7b-898015277e7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.832482 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac21b02-cdf3-4f92-8f7b-898015277e7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ac21b02-cdf3-4f92-8f7b-898015277e7a" (UID: "5ac21b02-cdf3-4f92-8f7b-898015277e7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.832581 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac21b02-cdf3-4f92-8f7b-898015277e7a-kube-api-access-zzrb7" (OuterVolumeSpecName: "kube-api-access-zzrb7") pod "5ac21b02-cdf3-4f92-8f7b-898015277e7a" (UID: "5ac21b02-cdf3-4f92-8f7b-898015277e7a"). InnerVolumeSpecName "kube-api-access-zzrb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.929437 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-client-ca\") pod \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.929515 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-config\") pod \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.929571 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-serving-cert\") pod \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.929636 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n5jk\" (UniqueName: \"kubernetes.io/projected/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-kube-api-access-8n5jk\") pod \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\" (UID: \"2bd00d6d-d30c-49c5-aa61-3392f0b29a86\") " Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.929801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c31d58-db4e-4e85-875c-e43304ef3aa0-serving-cert\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930135 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qxl2\" (UniqueName: \"kubernetes.io/projected/84c31d58-db4e-4e85-875c-e43304ef3aa0-kube-api-access-7qxl2\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930173 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-client-ca\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930194 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-config\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930213 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-proxy-ca-bundles\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 
crc kubenswrapper[4853]: I1209 17:01:21.930253 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ac21b02-cdf3-4f92-8f7b-898015277e7a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930265 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzrb7\" (UniqueName: \"kubernetes.io/projected/5ac21b02-cdf3-4f92-8f7b-898015277e7a-kube-api-access-zzrb7\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930275 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930283 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930292 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ac21b02-cdf3-4f92-8f7b-898015277e7a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930336 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-client-ca" (OuterVolumeSpecName: "client-ca") pod "2bd00d6d-d30c-49c5-aa61-3392f0b29a86" (UID: "2bd00d6d-d30c-49c5-aa61-3392f0b29a86"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.930388 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-config" (OuterVolumeSpecName: "config") pod "2bd00d6d-d30c-49c5-aa61-3392f0b29a86" (UID: "2bd00d6d-d30c-49c5-aa61-3392f0b29a86"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.931893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-proxy-ca-bundles\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.933342 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-config\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.933590 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84c31d58-db4e-4e85-875c-e43304ef3aa0-client-ca\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.936373 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2bd00d6d-d30c-49c5-aa61-3392f0b29a86" (UID: "2bd00d6d-d30c-49c5-aa61-3392f0b29a86"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.936374 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-kube-api-access-8n5jk" (OuterVolumeSpecName: "kube-api-access-8n5jk") pod "2bd00d6d-d30c-49c5-aa61-3392f0b29a86" (UID: "2bd00d6d-d30c-49c5-aa61-3392f0b29a86"). InnerVolumeSpecName "kube-api-access-8n5jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.939563 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84c31d58-db4e-4e85-875c-e43304ef3aa0-serving-cert\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.940388 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" event={"ID":"5ac21b02-cdf3-4f92-8f7b-898015277e7a","Type":"ContainerDied","Data":"dcda289da33a93e3227420e849f433c2e5dc668cc9613be96f35dd332627871f"} Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.940413 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qvjb6" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.940445 4853 scope.go:117] "RemoveContainer" containerID="00ee9d6714d3d2049cf11288d32e949f6a6a5ee09d25e202df592da63210ccfa" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.943305 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" event={"ID":"2bd00d6d-d30c-49c5-aa61-3392f0b29a86","Type":"ContainerDied","Data":"fd3802de2b612c33b348fde56330af6595d7b48b120b0ecd8a7416e49d4a1985"} Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.944776 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.950943 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.952911 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b46845b9bd59abf979be3332e911f99d2d5fba035833efe72316076ba7a7f60"} Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.956074 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qxl2\" (UniqueName: \"kubernetes.io/projected/84c31d58-db4e-4e85-875c-e43304ef3aa0-kube-api-access-7qxl2\") pod \"controller-manager-ccd675599-9prhn\" (UID: \"84c31d58-db4e-4e85-875c-e43304ef3aa0\") " pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:21 crc kubenswrapper[4853]: I1209 17:01:21.998695 4853 scope.go:117] "RemoveContainer" containerID="e80ce451813c3b13a5092780abffc1f0b5e3c40bc7fdf2bd28670d973bd25684" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.008686 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"] Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.014696 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sm7j"] Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.012329 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.020285 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvjb6"] Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.023890 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qvjb6"] Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.031254 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.031287 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.031301 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n5jk\" (UniqueName: \"kubernetes.io/projected/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-kube-api-access-8n5jk\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.031315 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd00d6d-d30c-49c5-aa61-3392f0b29a86-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.211923 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-ccd675599-9prhn"] Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.962750 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" event={"ID":"84c31d58-db4e-4e85-875c-e43304ef3aa0","Type":"ContainerStarted","Data":"452b5c01b874563309b8fccccc70f5139913c6ef5033f909abfaa41c89b08c00"} Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.963022 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" event={"ID":"84c31d58-db4e-4e85-875c-e43304ef3aa0","Type":"ContainerStarted","Data":"c74dc1f2da7377f80c245f6e7e69654f4b65d0efab5d896d4f9ef4aa91dd4654"} Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.963508 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.967509 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" Dec 09 17:01:22 crc kubenswrapper[4853]: I1209 17:01:22.981864 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-ccd675599-9prhn" podStartSLOduration=4.981845553 podStartE2EDuration="4.981845553s" podCreationTimestamp="2025-12-09 17:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:01:22.976207967 +0000 UTC m=+309.910947179" watchObservedRunningTime="2025-12-09 17:01:22.981845553 +0000 UTC m=+309.916584745" Dec 09 17:01:23 crc kubenswrapper[4853]: I1209 17:01:23.573936 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" path="/var/lib/kubelet/pods/2bd00d6d-d30c-49c5-aa61-3392f0b29a86/volumes" Dec 09 17:01:23 crc kubenswrapper[4853]: I1209 17:01:23.574709 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac21b02-cdf3-4f92-8f7b-898015277e7a" path="/var/lib/kubelet/pods/5ac21b02-cdf3-4f92-8f7b-898015277e7a/volumes" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.526797 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d"] Dec 09 17:01:24 crc kubenswrapper[4853]: E1209 17:01:24.527142 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerName="route-controller-manager" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.527172 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerName="route-controller-manager" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.527394 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd00d6d-d30c-49c5-aa61-3392f0b29a86" containerName="route-controller-manager" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.528173 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.534793 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.535055 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.535238 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.535342 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.535432 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.535525 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.538867 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d"] Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.665670 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/919ab32c-2a0b-4b53-b3b5-7e83fe562663-serving-cert\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.665759 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-client-ca\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: 
\"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.665807 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-config\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.665830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s5d8\" (UniqueName: \"kubernetes.io/projected/919ab32c-2a0b-4b53-b3b5-7e83fe562663-kube-api-access-7s5d8\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.766895 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-config\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.767153 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s5d8\" (UniqueName: \"kubernetes.io/projected/919ab32c-2a0b-4b53-b3b5-7e83fe562663-kube-api-access-7s5d8\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.767205 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/919ab32c-2a0b-4b53-b3b5-7e83fe562663-serving-cert\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.767304 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-client-ca\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.767942 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-client-ca\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.768053 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-config\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " 
pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.775215 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/919ab32c-2a0b-4b53-b3b5-7e83fe562663-serving-cert\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.785124 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s5d8\" (UniqueName: \"kubernetes.io/projected/919ab32c-2a0b-4b53-b3b5-7e83fe562663-kube-api-access-7s5d8\") pod \"route-controller-manager-7fcf9f74b6-4289d\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:24 crc kubenswrapper[4853]: I1209 17:01:24.847000 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:25 crc kubenswrapper[4853]: I1209 17:01:25.325779 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d"] Dec 09 17:01:25 crc kubenswrapper[4853]: W1209 17:01:25.329435 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod919ab32c_2a0b_4b53_b3b5_7e83fe562663.slice/crio-95c5d12af024950347feb5523dd99555658e47c4bbf4c7482cfa4e01415313a1 WatchSource:0}: Error finding container 95c5d12af024950347feb5523dd99555658e47c4bbf4c7482cfa4e01415313a1: Status 404 returned error can't find the container with id 95c5d12af024950347feb5523dd99555658e47c4bbf4c7482cfa4e01415313a1 Dec 09 17:01:25 crc kubenswrapper[4853]: I1209 17:01:25.986265 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" event={"ID":"919ab32c-2a0b-4b53-b3b5-7e83fe562663","Type":"ContainerStarted","Data":"b44fe0d5353d622d9b67840c6c484df7888982113d243e8ef1862f9285a21895"} Dec 09 17:01:25 crc kubenswrapper[4853]: I1209 17:01:25.986559 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" event={"ID":"919ab32c-2a0b-4b53-b3b5-7e83fe562663","Type":"ContainerStarted","Data":"95c5d12af024950347feb5523dd99555658e47c4bbf4c7482cfa4e01415313a1"} Dec 09 17:01:25 crc kubenswrapper[4853]: I1209 17:01:25.986825 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:26 crc kubenswrapper[4853]: I1209 17:01:26.004220 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" podStartSLOduration=8.004180264 podStartE2EDuration="8.004180264s" podCreationTimestamp="2025-12-09 17:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:01:26.002171646 +0000 UTC m=+312.936910838" watchObservedRunningTime="2025-12-09 17:01:26.004180264 +0000 UTC m=+312.938919446" Dec 09 17:01:26 crc kubenswrapper[4853]: I1209 17:01:26.048631 4853 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:28 crc kubenswrapper[4853]: I1209 17:01:28.841737 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 17:01:29 crc kubenswrapper[4853]: I1209 17:01:29.797158 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 17:01:29 crc kubenswrapper[4853]: I1209 17:01:29.801309 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 17:01:38 crc kubenswrapper[4853]: I1209 17:01:38.845087 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 17:01:39 crc kubenswrapper[4853]: I1209 17:01:39.916571 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d"] Dec 09 17:01:39 crc kubenswrapper[4853]: I1209 17:01:39.916799 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" podUID="919ab32c-2a0b-4b53-b3b5-7e83fe562663" containerName="route-controller-manager" containerID="cri-o://b44fe0d5353d622d9b67840c6c484df7888982113d243e8ef1862f9285a21895" gracePeriod=30 Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.055180 4853 generic.go:334] "Generic (PLEG): container finished" podID="919ab32c-2a0b-4b53-b3b5-7e83fe562663" containerID="b44fe0d5353d622d9b67840c6c484df7888982113d243e8ef1862f9285a21895" exitCode=0 Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.055223 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" event={"ID":"919ab32c-2a0b-4b53-b3b5-7e83fe562663","Type":"ContainerDied","Data":"b44fe0d5353d622d9b67840c6c484df7888982113d243e8ef1862f9285a21895"} Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.450176 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.465895 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-config\") pod \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.465951 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-client-ca\") pod \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.466018 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/919ab32c-2a0b-4b53-b3b5-7e83fe562663-serving-cert\") pod \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.466054 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s5d8\" (UniqueName: \"kubernetes.io/projected/919ab32c-2a0b-4b53-b3b5-7e83fe562663-kube-api-access-7s5d8\") pod \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\" (UID: \"919ab32c-2a0b-4b53-b3b5-7e83fe562663\") " Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.467072 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-client-ca" (OuterVolumeSpecName: "client-ca") pod "919ab32c-2a0b-4b53-b3b5-7e83fe562663" (UID: "919ab32c-2a0b-4b53-b3b5-7e83fe562663"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.467170 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-config" (OuterVolumeSpecName: "config") pod "919ab32c-2a0b-4b53-b3b5-7e83fe562663" (UID: "919ab32c-2a0b-4b53-b3b5-7e83fe562663"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.473863 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919ab32c-2a0b-4b53-b3b5-7e83fe562663-kube-api-access-7s5d8" (OuterVolumeSpecName: "kube-api-access-7s5d8") pod "919ab32c-2a0b-4b53-b3b5-7e83fe562663" (UID: "919ab32c-2a0b-4b53-b3b5-7e83fe562663"). InnerVolumeSpecName "kube-api-access-7s5d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.476766 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919ab32c-2a0b-4b53-b3b5-7e83fe562663-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "919ab32c-2a0b-4b53-b3b5-7e83fe562663" (UID: "919ab32c-2a0b-4b53-b3b5-7e83fe562663"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.567143 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s5d8\" (UniqueName: \"kubernetes.io/projected/919ab32c-2a0b-4b53-b3b5-7e83fe562663-kube-api-access-7s5d8\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.567170 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.567180 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/919ab32c-2a0b-4b53-b3b5-7e83fe562663-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:40 crc kubenswrapper[4853]: I1209 17:01:40.567188 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/919ab32c-2a0b-4b53-b3b5-7e83fe562663-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.074181 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" event={"ID":"919ab32c-2a0b-4b53-b3b5-7e83fe562663","Type":"ContainerDied","Data":"95c5d12af024950347feb5523dd99555658e47c4bbf4c7482cfa4e01415313a1"} Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.074492 4853 scope.go:117] "RemoveContainer" containerID="b44fe0d5353d622d9b67840c6c484df7888982113d243e8ef1862f9285a21895" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.074585 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.112350 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d"] Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.116247 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4289d"] Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.535338 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k"] Dec 09 17:01:41 crc kubenswrapper[4853]: E1209 17:01:41.535549 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919ab32c-2a0b-4b53-b3b5-7e83fe562663" containerName="route-controller-manager" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.535562 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="919ab32c-2a0b-4b53-b3b5-7e83fe562663" containerName="route-controller-manager" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.535701 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="919ab32c-2a0b-4b53-b3b5-7e83fe562663" containerName="route-controller-manager" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.536065 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.537827 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.537844 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.538336 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.541031 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.541152 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.541704 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.554182 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k"] Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.575174 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919ab32c-2a0b-4b53-b3b5-7e83fe562663" path="/var/lib/kubelet/pods/919ab32c-2a0b-4b53-b3b5-7e83fe562663/volumes" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.578390 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ccb41f-47c7-4367-8410-574281b283d2-serving-cert\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.579219 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5qjw\" (UniqueName: \"kubernetes.io/projected/45ccb41f-47c7-4367-8410-574281b283d2-kube-api-access-k5qjw\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.579289 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-config\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.579337 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-client-ca\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: 
I1209 17:01:41.681547 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ccb41f-47c7-4367-8410-574281b283d2-serving-cert\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.681656 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5qjw\" (UniqueName: \"kubernetes.io/projected/45ccb41f-47c7-4367-8410-574281b283d2-kube-api-access-k5qjw\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.681684 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-config\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.681706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-client-ca\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.682623 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-client-ca\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.682959 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-config\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.688075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ccb41f-47c7-4367-8410-574281b283d2-serving-cert\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.709182 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5qjw\" (UniqueName: \"kubernetes.io/projected/45ccb41f-47c7-4367-8410-574281b283d2-kube-api-access-k5qjw\") pod \"route-controller-manager-dbb5d498c-m9v7k\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:41 crc kubenswrapper[4853]: I1209 17:01:41.853871 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:42 crc kubenswrapper[4853]: I1209 17:01:42.274712 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k"] Dec 09 17:01:42 crc kubenswrapper[4853]: W1209 17:01:42.286019 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ccb41f_47c7_4367_8410_574281b283d2.slice/crio-d82206b7a10eeffdf235dc9cc5fcf980372e2bfb7115259b6c278d9511c87848 WatchSource:0}: Error finding container d82206b7a10eeffdf235dc9cc5fcf980372e2bfb7115259b6c278d9511c87848: Status 404 returned error can't find the container with id d82206b7a10eeffdf235dc9cc5fcf980372e2bfb7115259b6c278d9511c87848 Dec 09 17:01:43 crc kubenswrapper[4853]: I1209 17:01:43.087440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" event={"ID":"45ccb41f-47c7-4367-8410-574281b283d2","Type":"ContainerStarted","Data":"5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674"} Dec 09 17:01:43 crc kubenswrapper[4853]: I1209 17:01:43.087849 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:43 crc kubenswrapper[4853]: I1209 17:01:43.087864 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" event={"ID":"45ccb41f-47c7-4367-8410-574281b283d2","Type":"ContainerStarted","Data":"d82206b7a10eeffdf235dc9cc5fcf980372e2bfb7115259b6c278d9511c87848"} Dec 09 17:01:43 crc kubenswrapper[4853]: I1209 17:01:43.103021 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" podStartSLOduration=4.103004611 podStartE2EDuration="4.103004611s" podCreationTimestamp="2025-12-09 17:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:01:43.100891879 +0000 UTC m=+330.035631061" watchObservedRunningTime="2025-12-09 17:01:43.103004611 +0000 UTC m=+330.037743793" Dec 09 17:01:43 crc kubenswrapper[4853]: I1209 17:01:43.322417 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:58 crc kubenswrapper[4853]: I1209 17:01:58.601188 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k"] Dec 09 17:01:58 crc kubenswrapper[4853]: I1209 17:01:58.601782 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" podUID="45ccb41f-47c7-4367-8410-574281b283d2" containerName="route-controller-manager" containerID="cri-o://5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674" gracePeriod=30 Dec 09 17:01:58 crc kubenswrapper[4853]: I1209 17:01:58.978024 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.136766 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-client-ca\") pod \"45ccb41f-47c7-4367-8410-574281b283d2\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.136883 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5qjw\" (UniqueName: \"kubernetes.io/projected/45ccb41f-47c7-4367-8410-574281b283d2-kube-api-access-k5qjw\") pod \"45ccb41f-47c7-4367-8410-574281b283d2\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.136932 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ccb41f-47c7-4367-8410-574281b283d2-serving-cert\") pod \"45ccb41f-47c7-4367-8410-574281b283d2\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.136978 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-config\") pod \"45ccb41f-47c7-4367-8410-574281b283d2\" (UID: \"45ccb41f-47c7-4367-8410-574281b283d2\") " Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.137872 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "45ccb41f-47c7-4367-8410-574281b283d2" (UID: "45ccb41f-47c7-4367-8410-574281b283d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.137930 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-config" (OuterVolumeSpecName: "config") pod "45ccb41f-47c7-4367-8410-574281b283d2" (UID: "45ccb41f-47c7-4367-8410-574281b283d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.142730 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ccb41f-47c7-4367-8410-574281b283d2-kube-api-access-k5qjw" (OuterVolumeSpecName: "kube-api-access-k5qjw") pod "45ccb41f-47c7-4367-8410-574281b283d2" (UID: "45ccb41f-47c7-4367-8410-574281b283d2"). InnerVolumeSpecName "kube-api-access-k5qjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.147542 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ccb41f-47c7-4367-8410-574281b283d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45ccb41f-47c7-4367-8410-574281b283d2" (UID: "45ccb41f-47c7-4367-8410-574281b283d2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.175932 4853 generic.go:334] "Generic (PLEG): container finished" podID="45ccb41f-47c7-4367-8410-574281b283d2" containerID="5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674" exitCode=0 Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.175985 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" event={"ID":"45ccb41f-47c7-4367-8410-574281b283d2","Type":"ContainerDied","Data":"5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674"} Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.175997 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.176053 4853 scope.go:117] "RemoveContainer" containerID="5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.176041 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k" event={"ID":"45ccb41f-47c7-4367-8410-574281b283d2","Type":"ContainerDied","Data":"d82206b7a10eeffdf235dc9cc5fcf980372e2bfb7115259b6c278d9511c87848"} Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.199623 4853 scope.go:117] "RemoveContainer" containerID="5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674" Dec 09 17:01:59 crc kubenswrapper[4853]: E1209 17:01:59.200213 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674\": container with ID starting with 5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674 not found: ID does not exist" containerID="5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.200247 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674"} err="failed to get container status \"5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674\": rpc error: code = NotFound desc = could not find container \"5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674\": container with ID starting with 5e31d3bc852e1eac525e49d770ae24e6148dffc150e5cfa5462575845ffcb674 not found: ID does not exist" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.210571 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k"] Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.214957 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dbb5d498c-m9v7k"] Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.237713 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.237741 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45ccb41f-47c7-4367-8410-574281b283d2-client-ca\") on node \"crc\" 
DevicePath \"\"" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.237753 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5qjw\" (UniqueName: \"kubernetes.io/projected/45ccb41f-47c7-4367-8410-574281b283d2-kube-api-access-k5qjw\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.237763 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ccb41f-47c7-4367-8410-574281b283d2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:01:59 crc kubenswrapper[4853]: I1209 17:01:59.579555 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ccb41f-47c7-4367-8410-574281b283d2" path="/var/lib/kubelet/pods/45ccb41f-47c7-4367-8410-574281b283d2/volumes" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.551195 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd"] Dec 09 17:02:00 crc kubenswrapper[4853]: E1209 17:02:00.551436 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ccb41f-47c7-4367-8410-574281b283d2" containerName="route-controller-manager" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.551451 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ccb41f-47c7-4367-8410-574281b283d2" containerName="route-controller-manager" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.551575 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ccb41f-47c7-4367-8410-574281b283d2" containerName="route-controller-manager" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.551978 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.555844 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.556134 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.556848 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.557581 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.557621 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.557701 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.568395 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd"] Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.658930 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d31719e8-efad-469e-a768-64923b551a4e-client-ca\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: 
\"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.659203 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdm72\" (UniqueName: \"kubernetes.io/projected/d31719e8-efad-469e-a768-64923b551a4e-kube-api-access-kdm72\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.659254 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d31719e8-efad-469e-a768-64923b551a4e-serving-cert\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.659303 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31719e8-efad-469e-a768-64923b551a4e-config\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.760663 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d31719e8-efad-469e-a768-64923b551a4e-client-ca\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.760780 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdm72\" (UniqueName: \"kubernetes.io/projected/d31719e8-efad-469e-a768-64923b551a4e-kube-api-access-kdm72\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.760815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d31719e8-efad-469e-a768-64923b551a4e-serving-cert\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.760864 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31719e8-efad-469e-a768-64923b551a4e-config\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.762782 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31719e8-efad-469e-a768-64923b551a4e-config\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: 
\"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.764041 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d31719e8-efad-469e-a768-64923b551a4e-client-ca\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.773902 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d31719e8-efad-469e-a768-64923b551a4e-serving-cert\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.787117 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdm72\" (UniqueName: \"kubernetes.io/projected/d31719e8-efad-469e-a768-64923b551a4e-kube-api-access-kdm72\") pod \"route-controller-manager-7fcf9f74b6-4wphd\" (UID: \"d31719e8-efad-469e-a768-64923b551a4e\") " pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:00 crc kubenswrapper[4853]: I1209 17:02:00.881352 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:01 crc kubenswrapper[4853]: I1209 17:02:01.287831 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd"] Dec 09 17:02:02 crc kubenswrapper[4853]: I1209 17:02:02.195519 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" event={"ID":"d31719e8-efad-469e-a768-64923b551a4e","Type":"ContainerStarted","Data":"b1698659ea394f31a65b081be4f31e145f276888dcd719f6752b3bb718df5963"} Dec 09 17:02:02 crc kubenswrapper[4853]: I1209 17:02:02.195822 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" event={"ID":"d31719e8-efad-469e-a768-64923b551a4e","Type":"ContainerStarted","Data":"f83fdee9280945ed043001d68038222248b7b4bba4d160d1ec3209e30f2518c0"} Dec 09 17:02:02 crc kubenswrapper[4853]: I1209 17:02:02.195878 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:02 crc kubenswrapper[4853]: I1209 17:02:02.201430 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" Dec 09 17:02:02 crc kubenswrapper[4853]: I1209 17:02:02.220512 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fcf9f74b6-4wphd" podStartSLOduration=4.220489653 podStartE2EDuration="4.220489653s" podCreationTimestamp="2025-12-09 17:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:02:02.214663674 +0000 UTC m=+349.149402866" watchObservedRunningTime="2025-12-09 
17:02:02.220489653 +0000 UTC m=+349.155228855" Dec 09 17:02:20 crc kubenswrapper[4853]: I1209 17:02:20.956270 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r27lt"] Dec 09 17:02:20 crc kubenswrapper[4853]: I1209 17:02:20.957685 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:20 crc kubenswrapper[4853]: I1209 17:02:20.967978 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r27lt"] Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047619 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7855d6a5-81a4-4b38-86a7-db82b4f86a52-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047664 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7855d6a5-81a4-4b38-86a7-db82b4f86a52-trusted-ca\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047715 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-registry-tls\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-bound-sa-token\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047751 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7855d6a5-81a4-4b38-86a7-db82b4f86a52-registry-certificates\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047820 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047845 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcqg\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-kube-api-access-nlcqg\") pod \"image-registry-66df7c8f76-r27lt\" (UID: 
\"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.047878 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7855d6a5-81a4-4b38-86a7-db82b4f86a52-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.069135 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.148767 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-registry-tls\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.148823 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-bound-sa-token\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.148846 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7855d6a5-81a4-4b38-86a7-db82b4f86a52-registry-certificates\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.148882 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcqg\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-kube-api-access-nlcqg\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.148922 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7855d6a5-81a4-4b38-86a7-db82b4f86a52-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.148953 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7855d6a5-81a4-4b38-86a7-db82b4f86a52-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: 
I1209 17:02:21.148973 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7855d6a5-81a4-4b38-86a7-db82b4f86a52-trusted-ca\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.150163 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7855d6a5-81a4-4b38-86a7-db82b4f86a52-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.150768 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7855d6a5-81a4-4b38-86a7-db82b4f86a52-registry-certificates\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.151346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7855d6a5-81a4-4b38-86a7-db82b4f86a52-trusted-ca\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.155425 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7855d6a5-81a4-4b38-86a7-db82b4f86a52-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.159086 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-registry-tls\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.164456 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-bound-sa-token\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.168188 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcqg\" (UniqueName: \"kubernetes.io/projected/7855d6a5-81a4-4b38-86a7-db82b4f86a52-kube-api-access-nlcqg\") pod \"image-registry-66df7c8f76-r27lt\" (UID: \"7855d6a5-81a4-4b38-86a7-db82b4f86a52\") " pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.276591 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:21 crc kubenswrapper[4853]: I1209 17:02:21.713657 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r27lt"] Dec 09 17:02:22 crc kubenswrapper[4853]: I1209 17:02:22.324425 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" event={"ID":"7855d6a5-81a4-4b38-86a7-db82b4f86a52","Type":"ContainerStarted","Data":"ab3edf54c55d5fc89430713b8548015c46eb337e2171c3c6cf7a0d3f62f2ec8a"} Dec 09 17:02:22 crc kubenswrapper[4853]: I1209 17:02:22.324788 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" event={"ID":"7855d6a5-81a4-4b38-86a7-db82b4f86a52","Type":"ContainerStarted","Data":"b3da0a3025f0997a013e23bae4cbf5802a618507b7c009642569ce93fbaed0ed"} Dec 09 17:02:22 crc kubenswrapper[4853]: I1209 17:02:22.324956 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:22 crc kubenswrapper[4853]: I1209 17:02:22.350827 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" podStartSLOduration=2.350807643 podStartE2EDuration="2.350807643s" podCreationTimestamp="2025-12-09 17:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:02:22.345964443 +0000 UTC m=+369.280703635" watchObservedRunningTime="2025-12-09 17:02:22.350807643 +0000 UTC m=+369.285546825" Dec 09 17:02:28 crc kubenswrapper[4853]: I1209 17:02:28.593469 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:02:28 crc kubenswrapper[4853]: I1209 17:02:28.594124 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.349297 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtlzd"] Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.350225 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vtlzd" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="registry-server" containerID="cri-o://b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400" gracePeriod=30 Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.355009 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t2fjn"] Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.355222 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t2fjn" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="registry-server" containerID="cri-o://ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a" gracePeriod=30 
Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.366797 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cw4kq"] Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.367065 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerName="marketplace-operator" containerID="cri-o://4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52" gracePeriod=30 Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.376781 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg69z"] Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.377029 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lg69z" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="registry-server" containerID="cri-o://e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2" gracePeriod=30 Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.394327 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rszvg"] Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.395109 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rszvg" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="registry-server" containerID="cri-o://65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4" gracePeriod=30 Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.405120 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5kqd6"] Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.406064 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.411696 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5kqd6"] Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.424834 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a172d2-8ea2-44a0-959d-0b343cceeaec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.424883 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98a172d2-8ea2-44a0-959d-0b343cceeaec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.424915 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jz67\" (UniqueName: \"kubernetes.io/projected/98a172d2-8ea2-44a0-959d-0b343cceeaec-kube-api-access-5jz67\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.525933 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a172d2-8ea2-44a0-959d-0b343cceeaec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.526236 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98a172d2-8ea2-44a0-959d-0b343cceeaec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.526316 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jz67\" (UniqueName: \"kubernetes.io/projected/98a172d2-8ea2-44a0-959d-0b343cceeaec-kube-api-access-5jz67\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.527893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98a172d2-8ea2-44a0-959d-0b343cceeaec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.533850 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/98a172d2-8ea2-44a0-959d-0b343cceeaec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.545728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jz67\" (UniqueName: \"kubernetes.io/projected/98a172d2-8ea2-44a0-959d-0b343cceeaec-kube-api-access-5jz67\") pod \"marketplace-operator-79b997595-5kqd6\" (UID: \"98a172d2-8ea2-44a0-959d-0b343cceeaec\") " pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.736705 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.864184 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg69z" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.867259 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.867692 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.870496 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 17:02:34 crc kubenswrapper[4853]: I1209 17:02:34.876294 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rszvg" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038228 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-catalog-content\") pod \"e2b524f3-9a6d-4d43-a023-3b8deee90128\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038306 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-operator-metrics\") pod \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038380 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-utilities\") pod \"e2b524f3-9a6d-4d43-a023-3b8deee90128\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038437 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-trusted-ca\") pod \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038462 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-utilities\") pod \"5baa3927-1796-48fe-9238-27f8717fbe89\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038489 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-utilities\") pod \"ed152681-91c0-40d0-be74-21f8e751080d\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038515 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-catalog-content\") pod \"ed152681-91c0-40d0-be74-21f8e751080d\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038542 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5h7h\" (UniqueName: \"kubernetes.io/projected/ed152681-91c0-40d0-be74-21f8e751080d-kube-api-access-d5h7h\") pod \"ed152681-91c0-40d0-be74-21f8e751080d\" (UID: \"ed152681-91c0-40d0-be74-21f8e751080d\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038570 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-utilities\") pod \"cf894738-9ac9-49cf-a5be-c4414628c89c\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038621 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p74b\" (UniqueName: \"kubernetes.io/projected/e2b524f3-9a6d-4d43-a023-3b8deee90128-kube-api-access-6p74b\") 
pod \"e2b524f3-9a6d-4d43-a023-3b8deee90128\" (UID: \"e2b524f3-9a6d-4d43-a023-3b8deee90128\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038648 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-catalog-content\") pod \"cf894738-9ac9-49cf-a5be-c4414628c89c\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038680 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtg2z\" (UniqueName: \"kubernetes.io/projected/5baa3927-1796-48fe-9238-27f8717fbe89-kube-api-access-vtg2z\") pod \"5baa3927-1796-48fe-9238-27f8717fbe89\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038723 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp428\" (UniqueName: \"kubernetes.io/projected/cf894738-9ac9-49cf-a5be-c4414628c89c-kube-api-access-xp428\") pod \"cf894738-9ac9-49cf-a5be-c4414628c89c\" (UID: \"cf894738-9ac9-49cf-a5be-c4414628c89c\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038761 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-catalog-content\") pod \"5baa3927-1796-48fe-9238-27f8717fbe89\" (UID: \"5baa3927-1796-48fe-9238-27f8717fbe89\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.038794 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vb8w\" (UniqueName: \"kubernetes.io/projected/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-kube-api-access-9vb8w\") pod \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\" (UID: \"7e25b771-f015-46fc-ad94-e8b9aa6b49cb\") " Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.039173 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7e25b771-f015-46fc-ad94-e8b9aa6b49cb" (UID: "7e25b771-f015-46fc-ad94-e8b9aa6b49cb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.039452 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-utilities" (OuterVolumeSpecName: "utilities") pod "5baa3927-1796-48fe-9238-27f8717fbe89" (UID: "5baa3927-1796-48fe-9238-27f8717fbe89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.039464 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-utilities" (OuterVolumeSpecName: "utilities") pod "e2b524f3-9a6d-4d43-a023-3b8deee90128" (UID: "e2b524f3-9a6d-4d43-a023-3b8deee90128"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.039969 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-utilities" (OuterVolumeSpecName: "utilities") pod "cf894738-9ac9-49cf-a5be-c4414628c89c" (UID: "cf894738-9ac9-49cf-a5be-c4414628c89c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.040138 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-utilities" (OuterVolumeSpecName: "utilities") pod "ed152681-91c0-40d0-be74-21f8e751080d" (UID: "ed152681-91c0-40d0-be74-21f8e751080d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.043570 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b524f3-9a6d-4d43-a023-3b8deee90128-kube-api-access-6p74b" (OuterVolumeSpecName: "kube-api-access-6p74b") pod "e2b524f3-9a6d-4d43-a023-3b8deee90128" (UID: "e2b524f3-9a6d-4d43-a023-3b8deee90128"). InnerVolumeSpecName "kube-api-access-6p74b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.043631 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed152681-91c0-40d0-be74-21f8e751080d-kube-api-access-d5h7h" (OuterVolumeSpecName: "kube-api-access-d5h7h") pod "ed152681-91c0-40d0-be74-21f8e751080d" (UID: "ed152681-91c0-40d0-be74-21f8e751080d"). InnerVolumeSpecName "kube-api-access-d5h7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.043842 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-kube-api-access-9vb8w" (OuterVolumeSpecName: "kube-api-access-9vb8w") pod "7e25b771-f015-46fc-ad94-e8b9aa6b49cb" (UID: "7e25b771-f015-46fc-ad94-e8b9aa6b49cb"). InnerVolumeSpecName "kube-api-access-9vb8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.044571 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5baa3927-1796-48fe-9238-27f8717fbe89-kube-api-access-vtg2z" (OuterVolumeSpecName: "kube-api-access-vtg2z") pod "5baa3927-1796-48fe-9238-27f8717fbe89" (UID: "5baa3927-1796-48fe-9238-27f8717fbe89"). InnerVolumeSpecName "kube-api-access-vtg2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.044808 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf894738-9ac9-49cf-a5be-c4414628c89c-kube-api-access-xp428" (OuterVolumeSpecName: "kube-api-access-xp428") pod "cf894738-9ac9-49cf-a5be-c4414628c89c" (UID: "cf894738-9ac9-49cf-a5be-c4414628c89c"). InnerVolumeSpecName "kube-api-access-xp428". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.047543 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7e25b771-f015-46fc-ad94-e8b9aa6b49cb" (UID: "7e25b771-f015-46fc-ad94-e8b9aa6b49cb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.065070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5baa3927-1796-48fe-9238-27f8717fbe89" (UID: "5baa3927-1796-48fe-9238-27f8717fbe89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.097098 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed152681-91c0-40d0-be74-21f8e751080d" (UID: "ed152681-91c0-40d0-be74-21f8e751080d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.114418 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2b524f3-9a6d-4d43-a023-3b8deee90128" (UID: "e2b524f3-9a6d-4d43-a023-3b8deee90128"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140681 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp428\" (UniqueName: \"kubernetes.io/projected/cf894738-9ac9-49cf-a5be-c4414628c89c-kube-api-access-xp428\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140744 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140757 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vb8w\" (UniqueName: \"kubernetes.io/projected/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-kube-api-access-9vb8w\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140773 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140786 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140817 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b524f3-9a6d-4d43-a023-3b8deee90128-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140830 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e25b771-f015-46fc-ad94-e8b9aa6b49cb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140840 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5baa3927-1796-48fe-9238-27f8717fbe89-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140851 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140861 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed152681-91c0-40d0-be74-21f8e751080d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140901 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5h7h\" (UniqueName: \"kubernetes.io/projected/ed152681-91c0-40d0-be74-21f8e751080d-kube-api-access-d5h7h\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140911 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140924 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p74b\" (UniqueName: 
\"kubernetes.io/projected/e2b524f3-9a6d-4d43-a023-3b8deee90128-kube-api-access-6p74b\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.140935 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtg2z\" (UniqueName: \"kubernetes.io/projected/5baa3927-1796-48fe-9238-27f8717fbe89-kube-api-access-vtg2z\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.165040 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf894738-9ac9-49cf-a5be-c4414628c89c" (UID: "cf894738-9ac9-49cf-a5be-c4414628c89c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.167033 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5kqd6"] Dec 09 17:02:35 crc kubenswrapper[4853]: W1209 17:02:35.169472 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98a172d2_8ea2_44a0_959d_0b343cceeaec.slice/crio-614719a77e7ba5d30d36dd98ffdb40faff0424c0cd5b98b9a4ebcb99a9781543 WatchSource:0}: Error finding container 614719a77e7ba5d30d36dd98ffdb40faff0424c0cd5b98b9a4ebcb99a9781543: Status 404 returned error can't find the container with id 614719a77e7ba5d30d36dd98ffdb40faff0424c0cd5b98b9a4ebcb99a9781543 Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.242394 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf894738-9ac9-49cf-a5be-c4414628c89c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.404829 4853 generic.go:334] "Generic (PLEG): container finished" podID="ed152681-91c0-40d0-be74-21f8e751080d" containerID="b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400" exitCode=0 Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.404914 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtlzd" event={"ID":"ed152681-91c0-40d0-be74-21f8e751080d","Type":"ContainerDied","Data":"b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.404948 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vtlzd" event={"ID":"ed152681-91c0-40d0-be74-21f8e751080d","Type":"ContainerDied","Data":"f5da855fd30cc91c38ae71557b78a675e82cb9a5a114d6b830f58c51e4e92477"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.404978 4853 scope.go:117] "RemoveContainer" containerID="b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.405163 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vtlzd" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.412232 4853 generic.go:334] "Generic (PLEG): container finished" podID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerID="65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4" exitCode=0 Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.412308 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rszvg" event={"ID":"cf894738-9ac9-49cf-a5be-c4414628c89c","Type":"ContainerDied","Data":"65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.412698 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rszvg" event={"ID":"cf894738-9ac9-49cf-a5be-c4414628c89c","Type":"ContainerDied","Data":"a753641ad8b2db91c049871b8affc466599d258893d6eef42069ef6b88432e86"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.412385 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rszvg" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.417154 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" event={"ID":"98a172d2-8ea2-44a0-959d-0b343cceeaec","Type":"ContainerStarted","Data":"620d1faedc7bb4127b03828060ab4798ea7f743e430630088ded10193e7b08c5"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.417290 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.417310 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" event={"ID":"98a172d2-8ea2-44a0-959d-0b343cceeaec","Type":"ContainerStarted","Data":"614719a77e7ba5d30d36dd98ffdb40faff0424c0cd5b98b9a4ebcb99a9781543"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.418325 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5kqd6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body= Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.418390 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" podUID="98a172d2-8ea2-44a0-959d-0b343cceeaec" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.425439 4853 generic.go:334] "Generic (PLEG): container finished" podID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerID="ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a" exitCode=0 Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.425503 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2fjn" event={"ID":"e2b524f3-9a6d-4d43-a023-3b8deee90128","Type":"ContainerDied","Data":"ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.425861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2fjn" 
event={"ID":"e2b524f3-9a6d-4d43-a023-3b8deee90128","Type":"ContainerDied","Data":"ebb7fb890bd9078f12f0986b5743de2599f7cf1100b2287b194d5a39a48c6d84"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.425532 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t2fjn" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.427252 4853 scope.go:117] "RemoveContainer" containerID="4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.428556 4853 generic.go:334] "Generic (PLEG): container finished" podID="5baa3927-1796-48fe-9238-27f8717fbe89" containerID="e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2" exitCode=0 Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.428629 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg69z" event={"ID":"5baa3927-1796-48fe-9238-27f8717fbe89","Type":"ContainerDied","Data":"e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.428649 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lg69z" event={"ID":"5baa3927-1796-48fe-9238-27f8717fbe89","Type":"ContainerDied","Data":"93b6ffa6f9a94682d3d7e1713c38daffb4d919b92d5be31ba03775ac830f7eb9"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.428746 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lg69z" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.431127 4853 generic.go:334] "Generic (PLEG): container finished" podID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerID="4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52" exitCode=0 Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.431164 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" event={"ID":"7e25b771-f015-46fc-ad94-e8b9aa6b49cb","Type":"ContainerDied","Data":"4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.431188 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" event={"ID":"7e25b771-f015-46fc-ad94-e8b9aa6b49cb","Type":"ContainerDied","Data":"fc2864a85e47457d1931309f652c210f90466a8190a7d42e4edb280b9e891b13"} Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.431260 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cw4kq" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.436497 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" podStartSLOduration=1.436481592 podStartE2EDuration="1.436481592s" podCreationTimestamp="2025-12-09 17:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:02:35.435647489 +0000 UTC m=+382.370386681" watchObservedRunningTime="2025-12-09 17:02:35.436481592 +0000 UTC m=+382.371220784" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.451069 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rszvg"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.463105 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rszvg"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.470695 4853 scope.go:117] "RemoveContainer" containerID="4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.482887 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vtlzd"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.486022 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vtlzd"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.488202 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg69z"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.493794 4853 scope.go:117] "RemoveContainer" containerID="b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.494307 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400\": container with ID starting with b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400 not found: ID does not exist" containerID="b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.494349 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400"} err="failed to get container status \"b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400\": rpc error: code = NotFound desc = could not find container \"b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400\": container with ID starting with b6a6011c035f7addf2560971d228d011fcee7254b65720cbee6d3699af1bd400 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.494380 4853 scope.go:117] "RemoveContainer" containerID="4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.496014 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lg69z"] Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.496112 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531\": container with ID starting with 4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531 not found: ID does not exist" containerID="4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.496140 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531"} err="failed to get container status \"4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531\": rpc error: code = NotFound desc = could not find container \"4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531\": container with ID starting with 4a1ed9d4d5ed5151955d606558ac6ddfe8ee15c4095f1d21b8470e868a20f531 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.496160 4853 scope.go:117] "RemoveContainer" containerID="4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.496528 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8\": container with ID starting with 4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8 not found: ID does not exist" containerID="4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.496557 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8"} err="failed to get container status \"4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8\": rpc error: code = NotFound desc = could not find container \"4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8\": container with ID starting with 4cc8ebffad092dbb9c59db6d3e470ed75c313b1ec687467d993c0d9858ff8aa8 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.496573 4853 scope.go:117] "RemoveContainer" containerID="65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.501665 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t2fjn"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.503910 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t2fjn"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.510066 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cw4kq"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.516503 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cw4kq"] Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.543524 4853 scope.go:117] "RemoveContainer" containerID="018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.566141 4853 scope.go:117] "RemoveContainer" containerID="d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.573542 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" 
path="/var/lib/kubelet/pods/5baa3927-1796-48fe-9238-27f8717fbe89/volumes" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.574302 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" path="/var/lib/kubelet/pods/7e25b771-f015-46fc-ad94-e8b9aa6b49cb/volumes" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.574973 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" path="/var/lib/kubelet/pods/cf894738-9ac9-49cf-a5be-c4414628c89c/volumes" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.576262 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" path="/var/lib/kubelet/pods/e2b524f3-9a6d-4d43-a023-3b8deee90128/volumes" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.576930 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed152681-91c0-40d0-be74-21f8e751080d" path="/var/lib/kubelet/pods/ed152681-91c0-40d0-be74-21f8e751080d/volumes" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.582441 4853 scope.go:117] "RemoveContainer" containerID="65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.582927 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4\": container with ID starting with 65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4 not found: ID does not exist" containerID="65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.582971 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4"} err="failed to get container status \"65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4\": rpc error: code = NotFound desc = could not find container \"65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4\": container with ID starting with 65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.582999 4853 scope.go:117] "RemoveContainer" containerID="018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.583763 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa\": container with ID starting with 018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa not found: ID does not exist" containerID="018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.583789 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa"} err="failed to get container status \"018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa\": rpc error: code = NotFound desc = could not find container \"018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa\": container with ID starting with 018435e948625f70d95f1cf6bff134fefe5b648a907c9f59980f7d81825a2efa not found: ID does not exist" Dec 09 
17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.583810 4853 scope.go:117] "RemoveContainer" containerID="d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.584112 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7\": container with ID starting with d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7 not found: ID does not exist" containerID="d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.584143 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7"} err="failed to get container status \"d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7\": rpc error: code = NotFound desc = could not find container \"d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7\": container with ID starting with d88f7d9ce6facf1424347b454980338d1c76bf424fcad6090be6b3a20e3a94c7 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.584199 4853 scope.go:117] "RemoveContainer" containerID="ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.603270 4853 scope.go:117] "RemoveContainer" containerID="0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.618091 4853 scope.go:117] "RemoveContainer" containerID="033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.633120 4853 scope.go:117] "RemoveContainer" containerID="ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.633552 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a\": container with ID starting with ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a not found: ID does not exist" containerID="ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.633609 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a"} err="failed to get container status \"ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a\": rpc error: code = NotFound desc = could not find container \"ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a\": container with ID starting with ebb9dd7e09e0eb4df12a6ac6283a216f73c149c48c6bba646329309b1266663a not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.633638 4853 scope.go:117] "RemoveContainer" containerID="0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.634153 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1\": container with ID starting with 0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1 not found: ID 
does not exist" containerID="0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.634203 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1"} err="failed to get container status \"0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1\": rpc error: code = NotFound desc = could not find container \"0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1\": container with ID starting with 0f37491fbffe62589f6b7c6d2adbe566b7beaa0ea6e01c019b5eaf7dc879c6c1 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.634235 4853 scope.go:117] "RemoveContainer" containerID="033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.634558 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86\": container with ID starting with 033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86 not found: ID does not exist" containerID="033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.634586 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86"} err="failed to get container status \"033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86\": rpc error: code = NotFound desc = could not find container \"033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86\": container with ID starting with 033ae921206cd6f9c37838c57b65b105666b9b8a3961698fc84ab7080b13ff86 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.634615 4853 scope.go:117] "RemoveContainer" containerID="e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.647524 4853 scope.go:117] "RemoveContainer" containerID="92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.662559 4853 scope.go:117] "RemoveContainer" containerID="0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.675244 4853 scope.go:117] "RemoveContainer" containerID="e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.675742 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2\": container with ID starting with e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2 not found: ID does not exist" containerID="e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.675779 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2"} err="failed to get container status \"e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2\": rpc error: code = NotFound desc = could not find container 
\"e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2\": container with ID starting with e63a2f99b2c8cf3a0de1d39faceccd53cbf0d1cf324f5397ff33274cf09ab9c2 not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.675807 4853 scope.go:117] "RemoveContainer" containerID="92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.676134 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb\": container with ID starting with 92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb not found: ID does not exist" containerID="92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.676184 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb"} err="failed to get container status \"92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb\": rpc error: code = NotFound desc = could not find container \"92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb\": container with ID starting with 92d702bc470db5b8a6fbada00aacf58293cff0772b5fa7d556a12f6775a2a8cb not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.676215 4853 scope.go:117] "RemoveContainer" containerID="0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.676543 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e\": container with ID starting with 0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e not found: ID does not exist" containerID="0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.676583 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e"} err="failed to get container status \"0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e\": rpc error: code = NotFound desc = could not find container \"0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e\": container with ID starting with 0eae55c56f0fa8fb13f8f2f2b59a50e0c8f10e983c9fc4ecdbc7ef1037ac914e not found: ID does not exist" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.676636 4853 scope.go:117] "RemoveContainer" containerID="4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 17:02:35.696195 4853 scope.go:117] "RemoveContainer" containerID="4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52" Dec 09 17:02:35 crc kubenswrapper[4853]: E1209 17:02:35.697865 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52\": container with ID starting with 4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52 not found: ID does not exist" containerID="4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52" Dec 09 17:02:35 crc kubenswrapper[4853]: I1209 
17:02:35.697914 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52"} err="failed to get container status \"4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52\": rpc error: code = NotFound desc = could not find container \"4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52\": container with ID starting with 4d0e2c8fbdeae2fa12d97780c0ce23b479117773f7eacb96d64fecb837bc0a52 not found: ID does not exist" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.361944 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n5zk8"] Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362193 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362205 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362217 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362223 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362231 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362238 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362246 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerName="marketplace-operator" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362251 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerName="marketplace-operator" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362260 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362266 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362274 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362280 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362288 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362296 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: 
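The `RemoveContainer` → `ContainerStatus from runtime service failed` (NotFound) → `DeleteContainer returned error` triples above are typically benign: the kubelet is pruning dead containers from its bookkeeping, asks CRI-O for a final status, and the runtime has already garbage-collected them. A minimal sketch of the same check against the CRI API, treating NotFound as "already gone" (assumptions: CRI-O's default socket path; this is illustrative, not the kubelet's actual code path):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; adjust for other runtimes (assumption, not from the log).
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	id := "65362eed0826d63752421a765cf051d0cca57e48ce3321cc317895239bcd42e4" // ID from the log above

	_, err = rt.ContainerStatus(context.Background(),
		&runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		// The condition the kubelet logs as "ID does not exist": the runtime
		// already removed the container, so there is nothing left to delete.
		fmt.Println("container already gone; treating removal as complete")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("container still known to the runtime")
}
```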
E1209 17:02:36.362308 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362314 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362321 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362326 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362334 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362339 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="extract-utilities" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362347 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362353 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362360 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362366 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: E1209 17:02:36.362375 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362381 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="extract-content" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362461 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf894738-9ac9-49cf-a5be-c4414628c89c" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362472 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5baa3927-1796-48fe-9238-27f8717fbe89" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362482 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e25b771-f015-46fc-ad94-e8b9aa6b49cb" containerName="marketplace-operator" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362494 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b524f3-9a6d-4d43-a023-3b8deee90128" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.362503 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed152681-91c0-40d0-be74-21f8e751080d" containerName="registry-server" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.365661 4853 util.go:30] "No sandbox for pod can be found. 
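The `RemoveStaleState: removing container` / `Deleted CPUSet assignment` lines show the CPU and memory managers dropping per-container resource-accounting state for the pods just deleted, before admitting the new certified-operators-n5zk8 pod. The kubelet checkpoints this state under /var/lib/kubelet; a small sketch that decodes the CPU-manager checkpoint (assumptions: the default checkpoint path and field names as persisted by the kubelet; `entries` is only populated with the static policy, and stock deployments usually run the `none` policy):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Checkpoint layout as written by the kubelet's CPU manager
// (podUID -> container -> cpuset string in "entries").
type cpuManagerCheckpoint struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"`
	Checksum      uint64                       `json:"checksum"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		log.Fatal(err)
	}
	var cp cpuManagerCheckpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("policy=%s defaultCPUSet=%q pinnedContainers=%d\n",
		cp.PolicyName, cp.DefaultCPUSet, len(cp.Entries))
}
```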
Need to start a new one" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.367813 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.372207 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n5zk8"] Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.450169 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5kqd6" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.565992 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-utilities\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.566146 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-catalog-content\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.566666 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98np\" (UniqueName: \"kubernetes.io/projected/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-kube-api-access-q98np\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.667864 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98np\" (UniqueName: \"kubernetes.io/projected/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-kube-api-access-q98np\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.668208 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-utilities\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.668246 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-catalog-content\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.668858 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-utilities\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.670897 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-catalog-content\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.685671 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98np\" (UniqueName: \"kubernetes.io/projected/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-kube-api-access-q98np\") pod \"certified-operators-n5zk8\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.967430 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jz7h6"] Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.968533 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.971769 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.975347 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jz7h6"] Dec 09 17:02:36 crc kubenswrapper[4853]: I1209 17:02:36.980701 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.072402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-catalog-content\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.072452 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z8tn\" (UniqueName: \"kubernetes.io/projected/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-kube-api-access-5z8tn\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.072974 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-utilities\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.176107 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-catalog-content\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.176192 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z8tn\" (UniqueName: \"kubernetes.io/projected/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-kube-api-access-5z8tn\") pod 
\"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.176251 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-utilities\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.176855 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-utilities\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.189066 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-catalog-content\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.213818 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z8tn\" (UniqueName: \"kubernetes.io/projected/387d3b82-b9bc-4adc-a6f6-c4a9bee3b527-kube-api-access-5z8tn\") pod \"redhat-marketplace-jz7h6\" (UID: \"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527\") " pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.342161 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.362079 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n5zk8"] Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.456792 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5zk8" event={"ID":"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c","Type":"ContainerStarted","Data":"d0dc52ae573369dfc8989bac45a61ff92b65f2712c5d832cafe8633d2dfbfc27"} Dec 09 17:02:37 crc kubenswrapper[4853]: I1209 17:02:37.737713 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jz7h6"] Dec 09 17:02:37 crc kubenswrapper[4853]: W1209 17:02:37.747755 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387d3b82_b9bc_4adc_a6f6_c4a9bee3b527.slice/crio-9a42791bec6567f3e48521b5ca0d763e26075f37e55b3f9d5dda032a986b7e20 WatchSource:0}: Error finding container 9a42791bec6567f3e48521b5ca0d763e26075f37e55b3f9d5dda032a986b7e20: Status 404 returned error can't find the container with id 9a42791bec6567f3e48521b5ca0d763e26075f37e55b3f9d5dda032a986b7e20 Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.462295 4853 generic.go:334] "Generic (PLEG): container finished" podID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerID="c53dcd44f47d1f3234df523ee9bfa879ed4f81c686b144e199245519d9ccf779" exitCode=0 Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.462614 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5zk8" event={"ID":"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c","Type":"ContainerDied","Data":"c53dcd44f47d1f3234df523ee9bfa879ed4f81c686b144e199245519d9ccf779"} Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.464014 4853 generic.go:334] "Generic (PLEG): container finished" podID="387d3b82-b9bc-4adc-a6f6-c4a9bee3b527" containerID="e8e15d4ed51afec35d4dc1f0e5b6aa92835163264779f34ea67150da9d08dfba" exitCode=0 Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.464040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz7h6" event={"ID":"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527","Type":"ContainerDied","Data":"e8e15d4ed51afec35d4dc1f0e5b6aa92835163264779f34ea67150da9d08dfba"} Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.464058 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz7h6" event={"ID":"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527","Type":"ContainerStarted","Data":"9a42791bec6567f3e48521b5ca0d763e26075f37e55b3f9d5dda032a986b7e20"} Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.761281 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4lrq"] Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.762340 4853 util.go:30] "No sandbox for pod can be found. 
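The `Generic (PLEG): container finished ... exitCode=0` / ContainerDied pairs are the pod lifecycle event generator reporting that the catalog pods' init steps (`extract-utilities`, then `extract-content`) completed before `registry-server` starts. Those terminal exit codes land in the pod status; a hedged client-go sketch that reads them back (assumes in-cluster credentials; namespace and pod name are copied from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("openshift-marketplace").
		Get(context.Background(), "certified-operators-n5zk8", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Init containers should show Terminated with ExitCode 0, matching the
	// "container finished" PLEG events in the log above.
	for _, st := range pod.Status.InitContainerStatuses {
		if t := st.State.Terminated; t != nil {
			fmt.Printf("%s exited %d\n", st.Name, t.ExitCode)
		}
	}
}
```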
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.764307 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.773284 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4lrq"] Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.899093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9a6790-1308-4763-82a8-73c1a4ba6997-utilities\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.899167 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9a6790-1308-4763-82a8-73c1a4ba6997-catalog-content\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:38 crc kubenswrapper[4853]: I1209 17:02:38.899203 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pcdm\" (UniqueName: \"kubernetes.io/projected/bd9a6790-1308-4763-82a8-73c1a4ba6997-kube-api-access-2pcdm\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.000007 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9a6790-1308-4763-82a8-73c1a4ba6997-utilities\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.000060 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9a6790-1308-4763-82a8-73c1a4ba6997-catalog-content\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.000086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pcdm\" (UniqueName: \"kubernetes.io/projected/bd9a6790-1308-4763-82a8-73c1a4ba6997-kube-api-access-2pcdm\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.000558 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd9a6790-1308-4763-82a8-73c1a4ba6997-catalog-content\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.000677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd9a6790-1308-4763-82a8-73c1a4ba6997-utilities\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " 
pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.020152 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pcdm\" (UniqueName: \"kubernetes.io/projected/bd9a6790-1308-4763-82a8-73c1a4ba6997-kube-api-access-2pcdm\") pod \"redhat-operators-t4lrq\" (UID: \"bd9a6790-1308-4763-82a8-73c1a4ba6997\") " pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.080898 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.368454 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-462tl"] Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.370089 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.377383 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.388031 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-462tl"] Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.464585 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4lrq"] Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.506814 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-utilities\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.506857 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dpp\" (UniqueName: \"kubernetes.io/projected/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-kube-api-access-s2dpp\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.506920 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-catalog-content\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.607870 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-catalog-content\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.607932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-utilities\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " 
pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.607964 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dpp\" (UniqueName: \"kubernetes.io/projected/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-kube-api-access-s2dpp\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.608347 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-catalog-content\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.608697 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-utilities\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.633001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dpp\" (UniqueName: \"kubernetes.io/projected/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-kube-api-access-s2dpp\") pod \"community-operators-462tl\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:39 crc kubenswrapper[4853]: I1209 17:02:39.687777 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.069701 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-462tl"] Dec 09 17:02:40 crc kubenswrapper[4853]: W1209 17:02:40.078328 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb6ed4cb_a6be_4bdc_ad3e_c21dea9775ce.slice/crio-257d9fa0bae13d9775c3d503dfb14f2fc006eafe1e9ef6209c8e232509bb3ac6 WatchSource:0}: Error finding container 257d9fa0bae13d9775c3d503dfb14f2fc006eafe1e9ef6209c8e232509bb3ac6: Status 404 returned error can't find the container with id 257d9fa0bae13d9775c3d503dfb14f2fc006eafe1e9ef6209c8e232509bb3ac6 Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.488406 4853 generic.go:334] "Generic (PLEG): container finished" podID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerID="5a57adc92b122e31fb0a33408e24dbecef13eb409bdc233f852b8d356fa838d4" exitCode=0 Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.488795 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5zk8" event={"ID":"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c","Type":"ContainerDied","Data":"5a57adc92b122e31fb0a33408e24dbecef13eb409bdc233f852b8d356fa838d4"} Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.493078 4853 generic.go:334] "Generic (PLEG): container finished" podID="bd9a6790-1308-4763-82a8-73c1a4ba6997" containerID="b5b29bfc883eb70423e2a597e10eec5f3569cfcbab7522d8052207cdca2b1a30" exitCode=0 Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.493182 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4lrq" 
event={"ID":"bd9a6790-1308-4763-82a8-73c1a4ba6997","Type":"ContainerDied","Data":"b5b29bfc883eb70423e2a597e10eec5f3569cfcbab7522d8052207cdca2b1a30"} Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.493807 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4lrq" event={"ID":"bd9a6790-1308-4763-82a8-73c1a4ba6997","Type":"ContainerStarted","Data":"bffd25cc539181cf827023df7c33a4e4c998b85d06f4488dbe4235c196ca84ba"} Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.495987 4853 generic.go:334] "Generic (PLEG): container finished" podID="387d3b82-b9bc-4adc-a6f6-c4a9bee3b527" containerID="79d795708f82e069bf3fb5901955f37a896a70624668645964f763a35b00cd30" exitCode=0 Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.496051 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz7h6" event={"ID":"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527","Type":"ContainerDied","Data":"79d795708f82e069bf3fb5901955f37a896a70624668645964f763a35b00cd30"} Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.498061 4853 generic.go:334] "Generic (PLEG): container finished" podID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerID="8e6d7a050b5a265944bdd4fc3c85ef449a2d5b0b57cc25579f03b2d894bf06a1" exitCode=0 Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.498092 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-462tl" event={"ID":"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce","Type":"ContainerDied","Data":"8e6d7a050b5a265944bdd4fc3c85ef449a2d5b0b57cc25579f03b2d894bf06a1"} Dec 09 17:02:40 crc kubenswrapper[4853]: I1209 17:02:40.498112 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-462tl" event={"ID":"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce","Type":"ContainerStarted","Data":"257d9fa0bae13d9775c3d503dfb14f2fc006eafe1e9ef6209c8e232509bb3ac6"} Dec 09 17:02:41 crc kubenswrapper[4853]: I1209 17:02:41.280385 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-r27lt" Dec 09 17:02:41 crc kubenswrapper[4853]: I1209 17:02:41.352796 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57bs5"] Dec 09 17:02:42 crc kubenswrapper[4853]: I1209 17:02:42.508435 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz7h6" event={"ID":"387d3b82-b9bc-4adc-a6f6-c4a9bee3b527","Type":"ContainerStarted","Data":"b58491de3ef2f3decda8492620d071088abd0a9960775b756734ddcb1d297d9f"} Dec 09 17:02:42 crc kubenswrapper[4853]: I1209 17:02:42.513381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-462tl" event={"ID":"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce","Type":"ContainerStarted","Data":"e602bfa529721ade63d780bbacedeaf60435872f2dd7a999ac4393d06039c87b"} Dec 09 17:02:42 crc kubenswrapper[4853]: I1209 17:02:42.524242 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jz7h6" podStartSLOduration=2.663361713 podStartE2EDuration="6.524223119s" podCreationTimestamp="2025-12-09 17:02:36 +0000 UTC" firstStartedPulling="2025-12-09 17:02:38.465468075 +0000 UTC m=+385.400207257" lastFinishedPulling="2025-12-09 17:02:42.326329481 +0000 UTC m=+389.261068663" observedRunningTime="2025-12-09 17:02:42.523693934 +0000 UTC m=+389.458433116" 
watchObservedRunningTime="2025-12-09 17:02:42.524223119 +0000 UTC m=+389.458962301" Dec 09 17:02:43 crc kubenswrapper[4853]: I1209 17:02:43.521433 4853 generic.go:334] "Generic (PLEG): container finished" podID="bd9a6790-1308-4763-82a8-73c1a4ba6997" containerID="99f1410455b587dd973b6c7165527f973ea179f64a354357339e28298688e0b1" exitCode=0 Dec 09 17:02:43 crc kubenswrapper[4853]: I1209 17:02:43.521493 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4lrq" event={"ID":"bd9a6790-1308-4763-82a8-73c1a4ba6997","Type":"ContainerDied","Data":"99f1410455b587dd973b6c7165527f973ea179f64a354357339e28298688e0b1"} Dec 09 17:02:43 crc kubenswrapper[4853]: I1209 17:02:43.524751 4853 generic.go:334] "Generic (PLEG): container finished" podID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerID="e602bfa529721ade63d780bbacedeaf60435872f2dd7a999ac4393d06039c87b" exitCode=0 Dec 09 17:02:43 crc kubenswrapper[4853]: I1209 17:02:43.524823 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-462tl" event={"ID":"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce","Type":"ContainerDied","Data":"e602bfa529721ade63d780bbacedeaf60435872f2dd7a999ac4393d06039c87b"} Dec 09 17:02:43 crc kubenswrapper[4853]: I1209 17:02:43.529528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5zk8" event={"ID":"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c","Type":"ContainerStarted","Data":"899dc8d2628b6f674c4a5e35584c0bf6eb59945fba52c8123f75d4590132ec45"} Dec 09 17:02:43 crc kubenswrapper[4853]: I1209 17:02:43.584093 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n5zk8" podStartSLOduration=3.15034421 podStartE2EDuration="7.584073149s" podCreationTimestamp="2025-12-09 17:02:36 +0000 UTC" firstStartedPulling="2025-12-09 17:02:38.464816545 +0000 UTC m=+385.399555727" lastFinishedPulling="2025-12-09 17:02:42.898545484 +0000 UTC m=+389.833284666" observedRunningTime="2025-12-09 17:02:43.581674689 +0000 UTC m=+390.516413871" watchObservedRunningTime="2025-12-09 17:02:43.584073149 +0000 UTC m=+390.518812331" Dec 09 17:02:45 crc kubenswrapper[4853]: I1209 17:02:45.557367 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4lrq" event={"ID":"bd9a6790-1308-4763-82a8-73c1a4ba6997","Type":"ContainerStarted","Data":"6953271da5788a397d698485ebbe1de543f41ba9dedbd7cad6dca40124f962d2"} Dec 09 17:02:45 crc kubenswrapper[4853]: I1209 17:02:45.560316 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-462tl" event={"ID":"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce","Type":"ContainerStarted","Data":"d20d98d200385aaf6748fb9b0a049519c89c999b13941b8ea6b4b03bab839d7f"} Dec 09 17:02:45 crc kubenswrapper[4853]: I1209 17:02:45.582959 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4lrq" podStartSLOduration=3.714353204 podStartE2EDuration="7.582941865s" podCreationTimestamp="2025-12-09 17:02:38 +0000 UTC" firstStartedPulling="2025-12-09 17:02:40.49494032 +0000 UTC m=+387.429679502" lastFinishedPulling="2025-12-09 17:02:44.363528981 +0000 UTC m=+391.298268163" observedRunningTime="2025-12-09 17:02:45.575088507 +0000 UTC m=+392.509827699" watchObservedRunningTime="2025-12-09 17:02:45.582941865 +0000 UTC m=+392.517681047" Dec 09 17:02:46 crc kubenswrapper[4853]: I1209 17:02:46.981552 4853 kubelet.go:2542] 
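The `Observed pod startup duration` lines decompose cleanly: podStartE2EDuration = observedRunningTime − podCreationTimestamp, while podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling − firstStartedPulling), since pull time does not count against the startup SLO. Checking redhat-marketplace-jz7h6's numbers from the log (values copied verbatim; results match up to floating-point rounding):

```go
package main

import "fmt"

func main() {
	// Timestamps from the redhat-marketplace-jz7h6 line above,
	// expressed as seconds past 17:02:00.
	created := 36.0           // podCreationTimestamp 17:02:36
	running := 42.524223119   // observedRunningTime  17:02:42.524223119
	pullStart := 38.465468075 // firstStartedPulling
	pullEnd := 42.326329481   // lastFinishedPulling

	e2e := running - created           // 6.524223119 == podStartE2EDuration
	slo := e2e - (pullEnd - pullStart) // 2.663361713 == podStartSLOduration
	fmt.Printf("e2e=%.9f slo=%.9f\n", e2e, slo)
}
```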
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:46 crc kubenswrapper[4853]: I1209 17:02:46.982635 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:47 crc kubenswrapper[4853]: I1209 17:02:47.037537 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:47 crc kubenswrapper[4853]: I1209 17:02:47.057999 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-462tl" podStartSLOduration=3.954419363 podStartE2EDuration="8.05797898s" podCreationTimestamp="2025-12-09 17:02:39 +0000 UTC" firstStartedPulling="2025-12-09 17:02:40.499138512 +0000 UTC m=+387.433877694" lastFinishedPulling="2025-12-09 17:02:44.602698129 +0000 UTC m=+391.537437311" observedRunningTime="2025-12-09 17:02:45.61339726 +0000 UTC m=+392.548136452" watchObservedRunningTime="2025-12-09 17:02:47.05797898 +0000 UTC m=+393.992718172" Dec 09 17:02:47 crc kubenswrapper[4853]: I1209 17:02:47.343173 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:47 crc kubenswrapper[4853]: I1209 17:02:47.344778 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:47 crc kubenswrapper[4853]: I1209 17:02:47.385818 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:47 crc kubenswrapper[4853]: I1209 17:02:47.611356 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jz7h6" Dec 09 17:02:48 crc kubenswrapper[4853]: I1209 17:02:48.618373 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:02:49 crc kubenswrapper[4853]: I1209 17:02:49.081732 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:49 crc kubenswrapper[4853]: I1209 17:02:49.081789 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:49 crc kubenswrapper[4853]: I1209 17:02:49.688420 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:49 crc kubenswrapper[4853]: I1209 17:02:49.688771 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:49 crc kubenswrapper[4853]: I1209 17:02:49.723483 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:50 crc kubenswrapper[4853]: I1209 17:02:50.127820 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t4lrq" podUID="bd9a6790-1308-4763-82a8-73c1a4ba6997" containerName="registry-server" probeResult="failure" output=< Dec 09 17:02:50 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 17:02:50 crc kubenswrapper[4853]: > Dec 09 17:02:50 crc kubenswrapper[4853]: I1209 17:02:50.630082 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-462tl" Dec 09 17:02:58 crc kubenswrapper[4853]: I1209 17:02:58.593033 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:02:58 crc kubenswrapper[4853]: I1209 17:02:58.593477 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:02:59 crc kubenswrapper[4853]: I1209 17:02:59.132335 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:02:59 crc kubenswrapper[4853]: I1209 17:02:59.191275 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4lrq" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.024294 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd"] Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.026006 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.028458 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.028772 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.029126 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.029289 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.029666 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.035553 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd"] Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.166166 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2c841f12-00a0-439b-b2de-9e10b14be419-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.166235 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c841f12-00a0-439b-b2de-9e10b14be419-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: 
\"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.166294 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psvh\" (UniqueName: \"kubernetes.io/projected/2c841f12-00a0-439b-b2de-9e10b14be419-kube-api-access-2psvh\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.267648 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2c841f12-00a0-439b-b2de-9e10b14be419-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.267702 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c841f12-00a0-439b-b2de-9e10b14be419-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.267761 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2psvh\" (UniqueName: \"kubernetes.io/projected/2c841f12-00a0-439b-b2de-9e10b14be419-kube-api-access-2psvh\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.268919 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2c841f12-00a0-439b-b2de-9e10b14be419-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.276462 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c841f12-00a0-439b-b2de-9e10b14be419-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.289340 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2psvh\" (UniqueName: \"kubernetes.io/projected/2c841f12-00a0-439b-b2de-9e10b14be419-kube-api-access-2psvh\") pod \"cluster-monitoring-operator-6d5b84845-2qpvd\" (UID: \"2c841f12-00a0-439b-b2de-9e10b14be419\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.351934 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" Dec 09 17:03:04 crc kubenswrapper[4853]: I1209 17:03:04.785550 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd"] Dec 09 17:03:04 crc kubenswrapper[4853]: W1209 17:03:04.790826 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c841f12_00a0_439b_b2de_9e10b14be419.slice/crio-da11925718e7abd4c15a725191475d607f7e46066919e624a13d737a3135be2a WatchSource:0}: Error finding container da11925718e7abd4c15a725191475d607f7e46066919e624a13d737a3135be2a: Status 404 returned error can't find the container with id da11925718e7abd4c15a725191475d607f7e46066919e624a13d737a3135be2a Dec 09 17:03:05 crc kubenswrapper[4853]: I1209 17:03:05.668943 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" event={"ID":"2c841f12-00a0-439b-b2de-9e10b14be419","Type":"ContainerStarted","Data":"da11925718e7abd4c15a725191475d607f7e46066919e624a13d737a3135be2a"} Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.404520 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" podUID="d6ab11c3-427d-46dd-83e4-038afc30574a" containerName="registry" containerID="cri-o://73213c5761a8f674271e53f0d4d16c38ce03c1c97df374a5ebb39866348dd9bc" gracePeriod=30 Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.677231 4853 generic.go:334] "Generic (PLEG): container finished" podID="d6ab11c3-427d-46dd-83e4-038afc30574a" containerID="73213c5761a8f674271e53f0d4d16c38ce03c1c97df374a5ebb39866348dd9bc" exitCode=0 Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.677334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" event={"ID":"d6ab11c3-427d-46dd-83e4-038afc30574a","Type":"ContainerDied","Data":"73213c5761a8f674271e53f0d4d16c38ce03c1c97df374a5ebb39866348dd9bc"} Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.814072 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903290 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6ab11c3-427d-46dd-83e4-038afc30574a-installation-pull-secrets\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903582 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903651 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-bound-sa-token\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903797 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrpf4\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-kube-api-access-vrpf4\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903832 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-certificates\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903885 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6ab11c3-427d-46dd-83e4-038afc30574a-ca-trust-extracted\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903914 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-trusted-ca\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.903958 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-tls\") pod \"d6ab11c3-427d-46dd-83e4-038afc30574a\" (UID: \"d6ab11c3-427d-46dd-83e4-038afc30574a\") " Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.905416 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.905822 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.911643 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-kube-api-access-vrpf4" (OuterVolumeSpecName: "kube-api-access-vrpf4") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "kube-api-access-vrpf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.912481 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.912504 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ab11c3-427d-46dd-83e4-038afc30574a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.913482 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.918642 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 17:03:06 crc kubenswrapper[4853]: I1209 17:03:06.932099 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ab11c3-427d-46dd-83e4-038afc30574a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d6ab11c3-427d-46dd-83e4-038afc30574a" (UID: "d6ab11c3-427d-46dd-83e4-038afc30574a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.005591 4853 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d6ab11c3-427d-46dd-83e4-038afc30574a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.005647 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.005659 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrpf4\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-kube-api-access-vrpf4\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.005670 4853 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.005680 4853 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d6ab11c3-427d-46dd-83e4-038afc30574a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.005691 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ab11c3-427d-46dd-83e4-038afc30574a-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.005703 4853 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d6ab11c3-427d-46dd-83e4-038afc30574a-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.160964 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp"] Dec 09 17:03:07 crc kubenswrapper[4853]: E1209 17:03:07.161395 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ab11c3-427d-46dd-83e4-038afc30574a" containerName="registry" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.161407 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ab11c3-427d-46dd-83e4-038afc30574a" containerName="registry" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.161488 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ab11c3-427d-46dd-83e4-038afc30574a" containerName="registry" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.161894 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.164561 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-s6ztz" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.165247 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.173204 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp"] Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.310113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/383b0c3b-256d-41f5-95c8-0a1719c221df-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-sp7xp\" (UID: \"383b0c3b-256d-41f5-95c8-0a1719c221df\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.411484 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/383b0c3b-256d-41f5-95c8-0a1719c221df-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-sp7xp\" (UID: \"383b0c3b-256d-41f5-95c8-0a1719c221df\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.417662 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/383b0c3b-256d-41f5-95c8-0a1719c221df-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-sp7xp\" (UID: \"383b0c3b-256d-41f5-95c8-0a1719c221df\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.478179 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.688508 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" event={"ID":"2c841f12-00a0-439b-b2de-9e10b14be419","Type":"ContainerStarted","Data":"3b4eed3ef78bc71f72eac7912e65cd152efcea996d16c0c73a2234e33e336e08"} Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.689932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" event={"ID":"d6ab11c3-427d-46dd-83e4-038afc30574a","Type":"ContainerDied","Data":"31b390e8e7fad87bf5b0264658bc53b8351d8117c4f3ab5b1993bc2dbf558f5c"} Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.689974 4853 scope.go:117] "RemoveContainer" containerID="73213c5761a8f674271e53f0d4d16c38ce03c1c97df374a5ebb39866348dd9bc" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.689978 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-57bs5" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.707998 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2qpvd" podStartSLOduration=1.931650707 podStartE2EDuration="3.707979901s" podCreationTimestamp="2025-12-09 17:03:04 +0000 UTC" firstStartedPulling="2025-12-09 17:03:04.793402982 +0000 UTC m=+411.728142164" lastFinishedPulling="2025-12-09 17:03:06.569732176 +0000 UTC m=+413.504471358" observedRunningTime="2025-12-09 17:03:07.705542625 +0000 UTC m=+414.640281817" watchObservedRunningTime="2025-12-09 17:03:07.707979901 +0000 UTC m=+414.642719083" Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.721713 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57bs5"] Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.727076 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-57bs5"] Dec 09 17:03:07 crc kubenswrapper[4853]: I1209 17:03:07.880332 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp"] Dec 09 17:03:08 crc kubenswrapper[4853]: I1209 17:03:08.695353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" event={"ID":"383b0c3b-256d-41f5-95c8-0a1719c221df","Type":"ContainerStarted","Data":"464066733db695986d15352e65c6611021866e43b4f59fcdb0edaf959d1a755d"} Dec 09 17:03:09 crc kubenswrapper[4853]: I1209 17:03:09.577702 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ab11c3-427d-46dd-83e4-038afc30574a" path="/var/lib/kubelet/pods/d6ab11c3-427d-46dd-83e4-038afc30574a/volumes" Dec 09 17:03:10 crc kubenswrapper[4853]: I1209 17:03:10.708026 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" event={"ID":"383b0c3b-256d-41f5-95c8-0a1719c221df","Type":"ContainerStarted","Data":"ced6bce81440fdee6345b7752e019f6ea2ddb0409fd9c3a7da2de046400e7939"} Dec 09 17:03:10 crc kubenswrapper[4853]: I1209 17:03:10.708285 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" Dec 09 17:03:10 crc kubenswrapper[4853]: I1209 17:03:10.714416 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" Dec 09 17:03:10 crc kubenswrapper[4853]: I1209 17:03:10.723127 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-sp7xp" podStartSLOduration=2.098498938 podStartE2EDuration="3.723110168s" podCreationTimestamp="2025-12-09 17:03:07 +0000 UTC" firstStartedPulling="2025-12-09 17:03:07.892704694 +0000 UTC m=+414.827443916" lastFinishedPulling="2025-12-09 17:03:09.517315954 +0000 UTC m=+416.452055146" observedRunningTime="2025-12-09 17:03:10.722928033 +0000 UTC m=+417.657667205" watchObservedRunningTime="2025-12-09 17:03:10.723110168 +0000 UTC m=+417.657849350" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.321503 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-s4k25"] Dec 09 17:03:11 crc 
kubenswrapper[4853]: I1209 17:03:11.322494 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.325469 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.325705 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.327290 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.328487 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-s5pmz" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.332985 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-s4k25"] Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.470687 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8p9t\" (UniqueName: \"kubernetes.io/projected/daef6eab-ff8f-4228-b064-8b73494d5472-kube-api-access-f8p9t\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.470765 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/daef6eab-ff8f-4228-b064-8b73494d5472-metrics-client-ca\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.470793 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/daef6eab-ff8f-4228-b064-8b73494d5472-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.470832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/daef6eab-ff8f-4228-b064-8b73494d5472-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.572129 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/daef6eab-ff8f-4228-b064-8b73494d5472-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.572219 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/daef6eab-ff8f-4228-b064-8b73494d5472-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.572253 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8p9t\" (UniqueName: \"kubernetes.io/projected/daef6eab-ff8f-4228-b064-8b73494d5472-kube-api-access-f8p9t\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.572327 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/daef6eab-ff8f-4228-b064-8b73494d5472-metrics-client-ca\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.573940 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/daef6eab-ff8f-4228-b064-8b73494d5472-metrics-client-ca\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.579402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/daef6eab-ff8f-4228-b064-8b73494d5472-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.579440 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/daef6eab-ff8f-4228-b064-8b73494d5472-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.591301 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8p9t\" (UniqueName: \"kubernetes.io/projected/daef6eab-ff8f-4228-b064-8b73494d5472-kube-api-access-f8p9t\") pod \"prometheus-operator-db54df47d-s4k25\" (UID: \"daef6eab-ff8f-4228-b064-8b73494d5472\") " pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.686914 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" Dec 09 17:03:11 crc kubenswrapper[4853]: I1209 17:03:11.885005 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-s4k25"] Dec 09 17:03:11 crc kubenswrapper[4853]: W1209 17:03:11.892735 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaef6eab_ff8f_4228_b064_8b73494d5472.slice/crio-f5256a739e5e0dc07fa3a43df9deb9ab8716460013321be4be571c75930ab34a WatchSource:0}: Error finding container f5256a739e5e0dc07fa3a43df9deb9ab8716460013321be4be571c75930ab34a: Status 404 returned error can't find the container with id f5256a739e5e0dc07fa3a43df9deb9ab8716460013321be4be571c75930ab34a Dec 09 17:03:12 crc kubenswrapper[4853]: I1209 17:03:12.718867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" event={"ID":"daef6eab-ff8f-4228-b064-8b73494d5472","Type":"ContainerStarted","Data":"f5256a739e5e0dc07fa3a43df9deb9ab8716460013321be4be571c75930ab34a"} Dec 09 17:03:14 crc kubenswrapper[4853]: I1209 17:03:14.747727 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" event={"ID":"daef6eab-ff8f-4228-b064-8b73494d5472","Type":"ContainerStarted","Data":"7e66d81855731c30ce017ae9b0b8fb8ba5f98efabfa9702da31e4c99edc8ef26"} Dec 09 17:03:16 crc kubenswrapper[4853]: I1209 17:03:16.762431 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" event={"ID":"daef6eab-ff8f-4228-b064-8b73494d5472","Type":"ContainerStarted","Data":"23ead5bb37b7987b725463527ee36fe4b24690fe7e0852da62009211ab52eda9"} Dec 09 17:03:16 crc kubenswrapper[4853]: I1209 17:03:16.787586 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-s4k25" podStartSLOduration=3.121476801 podStartE2EDuration="5.787551219s" podCreationTimestamp="2025-12-09 17:03:11 +0000 UTC" firstStartedPulling="2025-12-09 17:03:11.894976148 +0000 UTC m=+418.829715330" lastFinishedPulling="2025-12-09 17:03:14.561050576 +0000 UTC m=+421.495789748" observedRunningTime="2025-12-09 17:03:16.782108363 +0000 UTC m=+423.716847565" watchObservedRunningTime="2025-12-09 17:03:16.787551219 +0000 UTC m=+423.722290441" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.692008 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r"] Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.693282 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.695415 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.699739 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-75z25" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.700661 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.715385 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l"] Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.716809 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.718474 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.720692 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.720927 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.721442 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-685g6" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.739865 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l"] Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.752257 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-qfbbt"] Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.754148 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.758409 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.758648 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.758859 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-ctl9j" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.768891 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r"] Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.769247 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5cr2\" (UniqueName: \"kubernetes.io/projected/03498a04-f751-4cf1-abf2-bd69a78c0ba1-kube-api-access-w5cr2\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.769308 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.769416 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.769457 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03498a04-f751-4cf1-abf2-bd69a78c0ba1-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870372 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-root\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870447 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 
17:03:18.870470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-wtmp\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870497 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/11268aeb-b600-4661-8543-73396c30ec48-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870554 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03498a04-f751-4cf1-abf2-bd69a78c0ba1-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870617 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj8cx\" (UniqueName: \"kubernetes.io/projected/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-kube-api-access-pj8cx\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870632 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-sys\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870651 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5cr2\" (UniqueName: \"kubernetes.io/projected/03498a04-f751-4cf1-abf2-bd69a78c0ba1-kube-api-access-w5cr2\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870668 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-metrics-client-ca\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870699 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870718 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870735 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-tls\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870755 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870771 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-textfile\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870797 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/11268aeb-b600-4661-8543-73396c30ec48-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.870815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxh6\" (UniqueName: \"kubernetes.io/projected/11268aeb-b600-4661-8543-73396c30ec48-kube-api-access-wxxh6\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: E1209 17:03:18.870955 4853 secret.go:188] Couldn't get secret 
openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Dec 09 17:03:18 crc kubenswrapper[4853]: E1209 17:03:18.871001 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-tls podName:03498a04-f751-4cf1-abf2-bd69a78c0ba1 nodeName:}" failed. No retries permitted until 2025-12-09 17:03:19.370984722 +0000 UTC m=+426.305723904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-kwk7r" (UID: "03498a04-f751-4cf1-abf2-bd69a78c0ba1") : secret "openshift-state-metrics-tls" not found Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.872302 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03498a04-f751-4cf1-abf2-bd69a78c0ba1-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.880943 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.893807 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5cr2\" (UniqueName: \"kubernetes.io/projected/03498a04-f751-4cf1-abf2-bd69a78c0ba1-kube-api-access-w5cr2\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.971853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-wtmp\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972227 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/11268aeb-b600-4661-8543-73396c30ec48-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972320 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972308 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-wtmp\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972370 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-sys\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972414 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-sys\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972457 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj8cx\" (UniqueName: \"kubernetes.io/projected/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-kube-api-access-pj8cx\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-metrics-client-ca\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972746 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-tls\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.972820 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.973280 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-metrics-client-ca\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.973572 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/11268aeb-b600-4661-8543-73396c30ec48-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.973626 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-textfile\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.973670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/11268aeb-b600-4661-8543-73396c30ec48-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.973742 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxh6\" (UniqueName: \"kubernetes.io/projected/11268aeb-b600-4661-8543-73396c30ec48-kube-api-access-wxxh6\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.973883 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-root\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.973906 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.974006 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-textfile\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.974082 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-root\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.974287 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 
17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.974848 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/11268aeb-b600-4661-8543-73396c30ec48-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.976388 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.977007 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-tls\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.978888 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/11268aeb-b600-4661-8543-73396c30ec48-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.979611 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:18 crc kubenswrapper[4853]: I1209 17:03:18.998315 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj8cx\" (UniqueName: \"kubernetes.io/projected/2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e-kube-api-access-pj8cx\") pod \"node-exporter-qfbbt\" (UID: \"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e\") " pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.002791 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxh6\" (UniqueName: \"kubernetes.io/projected/11268aeb-b600-4661-8543-73396c30ec48-kube-api-access-wxxh6\") pod \"kube-state-metrics-777cb5bd5d-9wl4l\" (UID: \"11268aeb-b600-4661-8543-73396c30ec48\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.031275 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.068706 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-qfbbt" Dec 09 17:03:19 crc kubenswrapper[4853]: W1209 17:03:19.095097 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d33cd3b_bcd7_47bb_ac7f_a932d4c1101e.slice/crio-8b715301929098da487298dc8b3dd682080c75d52e03540312bd67bdcd4b5ac2 WatchSource:0}: Error finding container 8b715301929098da487298dc8b3dd682080c75d52e03540312bd67bdcd4b5ac2: Status 404 returned error can't find the container with id 8b715301929098da487298dc8b3dd682080c75d52e03540312bd67bdcd4b5ac2 Dec 09 17:03:19 crc kubenswrapper[4853]: W1209 17:03:19.296004 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11268aeb_b600_4661_8543_73396c30ec48.slice/crio-009b670989e666b60fe03bc56c97686efbcd207785b3d9c524f8e837f5662818 WatchSource:0}: Error finding container 009b670989e666b60fe03bc56c97686efbcd207785b3d9c524f8e837f5662818: Status 404 returned error can't find the container with id 009b670989e666b60fe03bc56c97686efbcd207785b3d9c524f8e837f5662818 Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.297530 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l"] Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.382075 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.389584 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/03498a04-f751-4cf1-abf2-bd69a78c0ba1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-kwk7r\" (UID: \"03498a04-f751-4cf1-abf2-bd69a78c0ba1\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.614442 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.781796 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" event={"ID":"11268aeb-b600-4661-8543-73396c30ec48","Type":"ContainerStarted","Data":"009b670989e666b60fe03bc56c97686efbcd207785b3d9c524f8e837f5662818"} Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.782787 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qfbbt" event={"ID":"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e","Type":"ContainerStarted","Data":"8b715301929098da487298dc8b3dd682080c75d52e03540312bd67bdcd4b5ac2"} Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.831833 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.835018 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.836762 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.837566 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.837745 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.837949 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.838432 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.840454 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.840768 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-78qch"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.841005 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.848094 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.869367 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.888338 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.888802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/edf4edcd-1366-40ca-8127-20d4efcc9b0c-config-out\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.888833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/edf4edcd-1366-40ca-8127-20d4efcc9b0c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.888854 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edf4edcd-1366-40ca-8127-20d4efcc9b0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.888884 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.888907 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-config-volume\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.888934 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.889249 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.889375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7td8\" (UniqueName: \"kubernetes.io/projected/edf4edcd-1366-40ca-8127-20d4efcc9b0c-kube-api-access-b7td8\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.889416 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/edf4edcd-1366-40ca-8127-20d4efcc9b0c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.889510 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-web-config\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.889588 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/edf4edcd-1366-40ca-8127-20d4efcc9b0c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.990989 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/edf4edcd-1366-40ca-8127-20d4efcc9b0c-config-out\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991102 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/edf4edcd-1366-40ca-8127-20d4efcc9b0c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991129 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edf4edcd-1366-40ca-8127-20d4efcc9b0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-config-volume\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991231 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991279 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991304 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7td8\" (UniqueName: \"kubernetes.io/projected/edf4edcd-1366-40ca-8127-20d4efcc9b0c-kube-api-access-b7td8\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991321 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/edf4edcd-1366-40ca-8127-20d4efcc9b0c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991354 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-web-config\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.991380 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/edf4edcd-1366-40ca-8127-20d4efcc9b0c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.992161 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/edf4edcd-1366-40ca-8127-20d4efcc9b0c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.993384 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edf4edcd-1366-40ca-8127-20d4efcc9b0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.995830 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-config-volume\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.995954 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/edf4edcd-1366-40ca-8127-20d4efcc9b0c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.996100 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/edf4edcd-1366-40ca-8127-20d4efcc9b0c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.996723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-web-config\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.997294 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.997550 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.998168 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:19 crc kubenswrapper[4853]: I1209 17:03:19.998937 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/edf4edcd-1366-40ca-8127-20d4efcc9b0c-config-out\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.009553 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/edf4edcd-1366-40ca-8127-20d4efcc9b0c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.011511 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7td8\" (UniqueName: \"kubernetes.io/projected/edf4edcd-1366-40ca-8127-20d4efcc9b0c-kube-api-access-b7td8\") pod \"alertmanager-main-0\" (UID: \"edf4edcd-1366-40ca-8127-20d4efcc9b0c\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.082874 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r"]
Dec 09 17:03:20 crc kubenswrapper[4853]: W1209 17:03:20.090914 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03498a04_f751_4cf1_abf2_bd69a78c0ba1.slice/crio-8d220976eee143a07f0f541f1b8c658f209022f6e06f4611c7f93c4ffbfa2804 WatchSource:0}: Error finding container 8d220976eee143a07f0f541f1b8c658f209022f6e06f4611c7f93c4ffbfa2804: Status 404 returned error can't find the container with id 8d220976eee143a07f0f541f1b8c658f209022f6e06f4611c7f93c4ffbfa2804
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.152026 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.668419 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.790263 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-9ddd48f4-7fb22"]
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.794421 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.802237 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.803534 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" event={"ID":"03498a04-f751-4cf1-abf2-bd69a78c0ba1","Type":"ContainerStarted","Data":"acdaad29f68e90a93c7d611434b230479241315783c7096c02b88367cd2fddc5"}
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807198 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" event={"ID":"03498a04-f751-4cf1-abf2-bd69a78c0ba1","Type":"ContainerStarted","Data":"cd218fbee63f5ecc056e0e22e4fba2e299a6648fd89b2aa7d2220da000323421"}
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807210 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" event={"ID":"03498a04-f751-4cf1-abf2-bd69a78c0ba1","Type":"ContainerStarted","Data":"8d220976eee143a07f0f541f1b8c658f209022f6e06f4611c7f93c4ffbfa2804"}
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807475 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807682 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807786 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-g4tb6"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807937 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-enoq2gs6a5kud"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.807991 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.812302 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-9ddd48f4-7fb22"]
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.905496 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.905561 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.905619 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-tls\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.905741 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgbq\" (UniqueName: \"kubernetes.io/projected/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-kube-api-access-smgbq\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.905839 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.905956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-metrics-client-ca\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.906250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: I1209 17:03:20.906460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-grpc-tls\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:20 crc kubenswrapper[4853]: W1209 17:03:20.966193 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedf4edcd_1366_40ca_8127_20d4efcc9b0c.slice/crio-fcb67aee783c9e36a6bff9247043b89356e9f74dbcb50cbfe8ec938250b5ec51 WatchSource:0}: Error finding container fcb67aee783c9e36a6bff9247043b89356e9f74dbcb50cbfe8ec938250b5ec51: Status 404 returned error can't find the container with id fcb67aee783c9e36a6bff9247043b89356e9f74dbcb50cbfe8ec938250b5ec51
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.008034 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.008120 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-grpc-tls\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.008150 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.008180 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.008205 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-tls\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.009106 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smgbq\" (UniqueName: \"kubernetes.io/projected/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-kube-api-access-smgbq\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.009152 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.009208 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-metrics-client-ca\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.010075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-metrics-client-ca\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.015441 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-tls\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.015504 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.015536 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.015931 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.016176 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.017575 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-secret-grpc-tls\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.029769 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smgbq\" (UniqueName: \"kubernetes.io/projected/64f61264-80ba-4c5f-9f0c-2b5eb65b89d8-kube-api-access-smgbq\") pod \"thanos-querier-9ddd48f4-7fb22\" (UID: \"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8\") " pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.125457 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22"
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.822199 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" event={"ID":"11268aeb-b600-4661-8543-73396c30ec48","Type":"ContainerStarted","Data":"99868cd1593a20b89d9ef36c270661c443e05e2a437be1d13181db712927940d"}
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.855398 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qfbbt" event={"ID":"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e","Type":"ContainerStarted","Data":"a69dc7e30b665f976f82609a69b7a0ee5e7f30a71e5da144b2bada3976145ae9"}
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.859785 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerStarted","Data":"fcb67aee783c9e36a6bff9247043b89356e9f74dbcb50cbfe8ec938250b5ec51"}
Dec 09 17:03:21 crc kubenswrapper[4853]: I1209 17:03:21.960162 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-9ddd48f4-7fb22"]
Dec 09 17:03:22 crc kubenswrapper[4853]: I1209 17:03:22.868139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" event={"ID":"11268aeb-b600-4661-8543-73396c30ec48","Type":"ContainerStarted","Data":"d74bb6eb668316770e7363e76430380604090eb875ef44a3fdc0a670efe12f8d"}
Dec 09 17:03:22 crc kubenswrapper[4853]: I1209 17:03:22.868300 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" event={"ID":"11268aeb-b600-4661-8543-73396c30ec48","Type":"ContainerStarted","Data":"685274e16c0d1700b83fa8bdae7efb9b875ac01e15603b530da0f8f729a0b0e5"}
Dec 09 17:03:22 crc kubenswrapper[4853]: I1209 17:03:22.869442 4853 generic.go:334] "Generic (PLEG): container finished" podID="2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e" containerID="a69dc7e30b665f976f82609a69b7a0ee5e7f30a71e5da144b2bada3976145ae9" exitCode=0
Dec 09 17:03:22 crc kubenswrapper[4853]: I1209 17:03:22.869471 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qfbbt" event={"ID":"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e","Type":"ContainerDied","Data":"a69dc7e30b665f976f82609a69b7a0ee5e7f30a71e5da144b2bada3976145ae9"}
Dec 09 17:03:22 crc kubenswrapper[4853]: I1209 17:03:22.870431 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" event={"ID":"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8","Type":"ContainerStarted","Data":"c5116d18f59089c3a7b25506eaebda8fe201ea9346c3bab0bd47d14247985873"}
Dec 09 17:03:22 crc kubenswrapper[4853]: I1209 17:03:22.890997 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-9wl4l" podStartSLOduration=2.737761991 podStartE2EDuration="4.890978512s" podCreationTimestamp="2025-12-09 17:03:18 +0000 UTC" firstStartedPulling="2025-12-09 17:03:19.299997412 +0000 UTC m=+426.234736594" lastFinishedPulling="2025-12-09 17:03:21.453213933 +0000 UTC m=+428.387953115" observedRunningTime="2025-12-09 17:03:22.887705024 +0000 UTC m=+429.822444206" watchObservedRunningTime="2025-12-09 17:03:22.890978512 +0000 UTC m=+429.825717684"
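The "Observed pod startup duration" record above is self-consistent: podStartSLOduration is the end-to-end startup time with the image-pull window discounted, and that can be checked directly from the monotonic m=+ offsets carried in the same record. A quick verification with the kube-state-metrics numbers (plain subtraction, nothing else assumed):

```python
# Monotonic offsets (seconds) copied from the kube-state-metrics record above.
first_started_pulling = 426.234736594   # firstStartedPulling  m=+426.234736594
last_finished_pulling = 428.387953115   # lastFinishedPulling  m=+428.387953115
pod_start_e2e         = 4.890978512     # podStartE2EDuration "4.890978512s"

pull_window = last_finished_pulling - first_started_pulling   # 2.153216521 s
print(round(pod_start_e2e - pull_window, 9))                  # 2.737761991 = podStartSLOduration
```

So of the ~4.9 s this pod took end to end, ~2.15 s was spent pulling images and only ~2.74 s counts against the startup SLO.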
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.514360 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fd7cb74df-c9w9d"]
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.515635 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.538174 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fd7cb74df-c9w9d"]
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.659384 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrw6r\" (UniqueName: \"kubernetes.io/projected/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-kube-api-access-jrw6r\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.659468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-trusted-ca-bundle\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.659547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-oauth-config\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.659566 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-serving-cert\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.659591 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-config\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.659650 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-service-ca\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.659680 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-oauth-serving-cert\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.761653 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrw6r\" (UniqueName: \"kubernetes.io/projected/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-kube-api-access-jrw6r\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.762259 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-trusted-ca-bundle\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.762348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-oauth-config\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.762379 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-serving-cert\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.762414 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-config\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.762453 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-service-ca\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.762503 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-oauth-serving-cert\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.763252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-config\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.763763 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-service-ca\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.763981 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-trusted-ca-bundle\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.764948 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-oauth-serving-cert\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.767752 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-oauth-config\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.767821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-serving-cert\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.778318 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrw6r\" (UniqueName: \"kubernetes.io/projected/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-kube-api-access-jrw6r\") pod \"console-fd7cb74df-c9w9d\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.838066 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fd7cb74df-c9w9d"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.879092 4853 generic.go:334] "Generic (PLEG): container finished" podID="edf4edcd-1366-40ca-8127-20d4efcc9b0c" containerID="f178f6a06a38c84505882cad33eb22b3a426d21e60d147c2f9855a3ad62a7d71" exitCode=0
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.879206 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerDied","Data":"f178f6a06a38c84505882cad33eb22b3a426d21e60d147c2f9855a3ad62a7d71"}
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.887794 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" event={"ID":"03498a04-f751-4cf1-abf2-bd69a78c0ba1","Type":"ContainerStarted","Data":"2dadd77f9d244140082beafc00fcec1e0c18cd7ccf9fcb5ac782ff8f5ccc51c4"}
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.892885 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qfbbt" event={"ID":"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e","Type":"ContainerStarted","Data":"f59898e0be19ed0aa058f251aca63270b793ecb3a310992a17150ffc0552e533"}
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.892955 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qfbbt" event={"ID":"2d33cd3b-bcd7-47bb-ac7f-a932d4c1101e","Type":"ContainerStarted","Data":"e24d9a77638574a0c76db9b0d9808a1e78e7d7e9a458d7317f527f8d0d9d0720"}
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.949715 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-kwk7r" podStartSLOduration=3.470259303 podStartE2EDuration="5.949696166s" podCreationTimestamp="2025-12-09 17:03:18 +0000 UTC" firstStartedPulling="2025-12-09 17:03:20.395905337 +0000 UTC m=+427.330644509" lastFinishedPulling="2025-12-09 17:03:22.8753422 +0000 UTC m=+429.810081372" observedRunningTime="2025-12-09 17:03:23.940454727 +0000 UTC m=+430.875193909" watchObservedRunningTime="2025-12-09 17:03:23.949696166 +0000 UTC m=+430.884435348"
Dec 09 17:03:23 crc kubenswrapper[4853]: I1209 17:03:23.963843 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-qfbbt" podStartSLOduration=3.610258921 podStartE2EDuration="5.963795785s" podCreationTimestamp="2025-12-09 17:03:18 +0000 UTC" firstStartedPulling="2025-12-09 17:03:19.098807805 +0000 UTC m=+426.033546987" lastFinishedPulling="2025-12-09 17:03:21.452344669 +0000 UTC m=+428.387083851" observedRunningTime="2025-12-09 17:03:23.963182799 +0000 UTC m=+430.897921981" watchObservedRunningTime="2025-12-09 17:03:23.963795785 +0000 UTC m=+430.898534987"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.053228 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-776466597f-vpxr8"]
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.054476 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
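Two patterns repeat in the PLEG events above: init containers finishing cleanly (generic.go:334 reports exitCode=0, immediately followed by a ContainerDied event, as with alertmanager-main-0's f178f6a0... and node-exporter's a69dc7e3... earlier), and regular containers coming up as ContainerStarted. Only the non-zero exit codes need attention when scanning a journal like this one; a filter along these lines (again assuming the journal has been saved to the hypothetical kubelet.log) separates them:

```python
import re

# Shape of the generic.go:334 records above, e.g.
# ... "Generic (PLEG): container finished" podID="..." containerID="..." exitCode=0
FINISHED = re.compile(
    r'container finished" podID="([0-9a-f-]+)" '
    r'containerID="([0-9a-f]+)" exitCode=(-?\d+)'
)

def nonzero_exits(path):
    """Yield (podID, containerID, exitCode) for containers that did not exit cleanly."""
    with open(path) as f:
        for line in f:
            m = FINISHED.search(line)
            if m and m.group(3) != "0":
                yield m.group(1), m.group(2), int(m.group(3))

for pod, cid, code in nonzero_exits("kubelet.log"):
    print(f"pod {pod}: container {cid[:12]} exited {code}")
```

Against this section the filter is silent: every finished container so far is an exitCode=0 init-container handoff.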
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.060410 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.060482 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.060670 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.060848 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-qfp2z"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.061403 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.061507 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-d0dnsnf4373k9"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.092567 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-776466597f-vpxr8"]
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.174789 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fd3da79d-116b-48c5-bf0a-82e96c169e21-metrics-server-audit-profiles\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.174961 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-secret-metrics-client-certs\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.175102 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-secret-metrics-server-tls\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.175138 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fd3da79d-116b-48c5-bf0a-82e96c169e21-audit-log\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.175210 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-client-ca-bundle\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.175250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd3da79d-116b-48c5-bf0a-82e96c169e21-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.175281 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxxb\" (UniqueName: \"kubernetes.io/projected/fd3da79d-116b-48c5-bf0a-82e96c169e21-kube-api-access-9nxxb\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.277127 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-client-ca-bundle\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.278062 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd3da79d-116b-48c5-bf0a-82e96c169e21-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.278115 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxxb\" (UniqueName: \"kubernetes.io/projected/fd3da79d-116b-48c5-bf0a-82e96c169e21-kube-api-access-9nxxb\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.278217 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fd3da79d-116b-48c5-bf0a-82e96c169e21-metrics-server-audit-profiles\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.278288 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-secret-metrics-client-certs\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.278451 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-secret-metrics-server-tls\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.278513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fd3da79d-116b-48c5-bf0a-82e96c169e21-audit-log\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.279115 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fd3da79d-116b-48c5-bf0a-82e96c169e21-audit-log\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.279143 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd3da79d-116b-48c5-bf0a-82e96c169e21-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.281591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fd3da79d-116b-48c5-bf0a-82e96c169e21-metrics-server-audit-profiles\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.286866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-secret-metrics-server-tls\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.287252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-secret-metrics-client-certs\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.290070 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd3da79d-116b-48c5-bf0a-82e96c169e21-client-ca-bundle\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.297257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxxb\" (UniqueName: \"kubernetes.io/projected/fd3da79d-116b-48c5-bf0a-82e96c169e21-kube-api-access-9nxxb\") pod \"metrics-server-776466597f-vpxr8\" (UID: \"fd3da79d-116b-48c5-bf0a-82e96c169e21\") " pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.316735 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fd7cb74df-c9w9d"]
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.382416 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-776466597f-vpxr8"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.488478 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-676657567-chvpg"]
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.489347 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-676657567-chvpg"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.492229 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.494803 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.501190 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-676657567-chvpg"]
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.582521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/0245196f-6ba1-4767-86b7-b86872cbfff3-monitoring-plugin-cert\") pod \"monitoring-plugin-676657567-chvpg\" (UID: \"0245196f-6ba1-4767-86b7-b86872cbfff3\") " pod="openshift-monitoring/monitoring-plugin-676657567-chvpg"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.685156 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/0245196f-6ba1-4767-86b7-b86872cbfff3-monitoring-plugin-cert\") pod \"monitoring-plugin-676657567-chvpg\" (UID: \"0245196f-6ba1-4767-86b7-b86872cbfff3\") " pod="openshift-monitoring/monitoring-plugin-676657567-chvpg"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.692077 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/0245196f-6ba1-4767-86b7-b86872cbfff3-monitoring-plugin-cert\") pod \"monitoring-plugin-676657567-chvpg\" (UID: \"0245196f-6ba1-4767-86b7-b86872cbfff3\") " pod="openshift-monitoring/monitoring-plugin-676657567-chvpg"
Dec 09 17:03:24 crc kubenswrapper[4853]: I1209 17:03:24.835844 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-676657567-chvpg"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.088888 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.092062 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.100446 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.102638 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.102862 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-rjkcq"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.102874 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.103048 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.103462 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.104065 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.104138 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.104295 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.104325 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.106055 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-q3m92cnttod9"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.115373 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.115806 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.128578 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.194653 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.194707 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.194730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c0881850-0cd6-4e89-8691-ee2f7965bc72-config-out\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.194779 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2k7j\" (UniqueName: \"kubernetes.io/projected/c0881850-0cd6-4e89-8691-ee2f7965bc72-kube-api-access-d2k7j\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.194894 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-web-config\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.194955 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195007 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195090 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195137 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195175 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-config\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195443 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195479 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195496 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195562 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c0881850-0cd6-4e89-8691-ee2f7965bc72-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195612 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.195634 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298139 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298223 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298257 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298293 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298345 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c0881850-0cd6-4e89-8691-ee2f7965bc72-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298376 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298397 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298471 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298505 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298541 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c0881850-0cd6-4e89-8691-ee2f7965bc72-config-out\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2k7j\" (UniqueName: \"kubernetes.io/projected/c0881850-0cd6-4e89-8691-ee2f7965bc72-kube-api-access-d2k7j\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298675 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-web-config\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298710 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298754 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298844 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.298976 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-config\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.299546 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.300681 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.300739 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.301433 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.302771 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.302793 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.303444 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-config\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.303665 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.303979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c0881850-0cd6-4e89-8691-ee2f7965bc72-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.304298 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.304858 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-web-config\") pod 
\"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.309471 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.309668 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.309685 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.309713 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c0881850-0cd6-4e89-8691-ee2f7965bc72-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.318394 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c0881850-0cd6-4e89-8691-ee2f7965bc72-config-out\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.318591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c0881850-0cd6-4e89-8691-ee2f7965bc72-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.322614 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2k7j\" (UniqueName: \"kubernetes.io/projected/c0881850-0cd6-4e89-8691-ee2f7965bc72-kube-api-access-d2k7j\") pod \"prometheus-k8s-0\" (UID: \"c0881850-0cd6-4e89-8691-ee2f7965bc72\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.410059 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:25 crc kubenswrapper[4853]: W1209 17:03:25.456911 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdeaefa7_0304_4062_9d4e_d2a2e0a15739.slice/crio-c47df9daa44ebc5b55745167484c76795a398a3a8c7566b01739bdae1aa64adc WatchSource:0}: Error finding container c47df9daa44ebc5b55745167484c76795a398a3a8c7566b01739bdae1aa64adc: Status 404 returned error can't find the container with id c47df9daa44ebc5b55745167484c76795a398a3a8c7566b01739bdae1aa64adc Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.914644 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-c9w9d" event={"ID":"fdeaefa7-0304-4062-9d4e-d2a2e0a15739","Type":"ContainerStarted","Data":"d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101"} Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.915182 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-c9w9d" event={"ID":"fdeaefa7-0304-4062-9d4e-d2a2e0a15739","Type":"ContainerStarted","Data":"c47df9daa44ebc5b55745167484c76795a398a3a8c7566b01739bdae1aa64adc"} Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.916875 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" event={"ID":"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8","Type":"ContainerStarted","Data":"2a5cf6ccc1a9f8f0cb4b0f51e516039edd84a61c67856b6859de7ddfaa552d38"} Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.916964 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" event={"ID":"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8","Type":"ContainerStarted","Data":"a5055f19511e695264a111da9970c086a747a939963c6130e7f901bc4d4192a4"} Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.945659 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fd7cb74df-c9w9d" podStartSLOduration=2.945627932 podStartE2EDuration="2.945627932s" podCreationTimestamp="2025-12-09 17:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:03:25.944518972 +0000 UTC m=+432.879258154" watchObservedRunningTime="2025-12-09 17:03:25.945627932 +0000 UTC m=+432.880367114" Dec 09 17:03:25 crc kubenswrapper[4853]: I1209 17:03:25.966448 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-676657567-chvpg"] Dec 09 17:03:25 crc kubenswrapper[4853]: W1209 17:03:25.968588 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0245196f_6ba1_4767_86b7_b86872cbfff3.slice/crio-c0f691c63d32b904613ae1d07937bf08856ecb705b53fdf4f5470ba0cb4f0a4f WatchSource:0}: Error finding container c0f691c63d32b904613ae1d07937bf08856ecb705b53fdf4f5470ba0cb4f0a4f: Status 404 returned error can't find the container with id c0f691c63d32b904613ae1d07937bf08856ecb705b53fdf4f5470ba0cb4f0a4f Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.011124 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-776466597f-vpxr8"] Dec 09 17:03:26 crc kubenswrapper[4853]: W1209 17:03:26.015132 4853 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd3da79d_116b_48c5_bf0a_82e96c169e21.slice/crio-82b3a6164b23d46c148fb219f87924bf8227b4bef7e200b1f5e94852376e1c13 WatchSource:0}: Error finding container 82b3a6164b23d46c148fb219f87924bf8227b4bef7e200b1f5e94852376e1c13: Status 404 returned error can't find the container with id 82b3a6164b23d46c148fb219f87924bf8227b4bef7e200b1f5e94852376e1c13 Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.095590 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.924182 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-676657567-chvpg" event={"ID":"0245196f-6ba1-4767-86b7-b86872cbfff3","Type":"ContainerStarted","Data":"c0f691c63d32b904613ae1d07937bf08856ecb705b53fdf4f5470ba0cb4f0a4f"} Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.926423 4853 generic.go:334] "Generic (PLEG): container finished" podID="c0881850-0cd6-4e89-8691-ee2f7965bc72" containerID="9a45e94553bfbe59947bada49e8e6734e40e104c0e8d5a70122297fcd17846d0" exitCode=0 Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.926505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerDied","Data":"9a45e94553bfbe59947bada49e8e6734e40e104c0e8d5a70122297fcd17846d0"} Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.926538 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerStarted","Data":"2c8533783bb7f696cbfc6f3009ff659467e90652022437f39542c1a6c8d183fd"} Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.928188 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-776466597f-vpxr8" event={"ID":"fd3da79d-116b-48c5-bf0a-82e96c169e21","Type":"ContainerStarted","Data":"82b3a6164b23d46c148fb219f87924bf8227b4bef7e200b1f5e94852376e1c13"} Dec 09 17:03:26 crc kubenswrapper[4853]: I1209 17:03:26.935774 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" event={"ID":"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8","Type":"ContainerStarted","Data":"dded2cc94a0c2b976196df7c3885f957571767d977e7d1ae8e51ab69014eeeb4"} Dec 09 17:03:27 crc kubenswrapper[4853]: I1209 17:03:27.947522 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerStarted","Data":"15efeaeecc199d82a60cfd8418d5e89fe1a8ae0d24da178a1a918ddb81f54859"} Dec 09 17:03:27 crc kubenswrapper[4853]: I1209 17:03:27.948351 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerStarted","Data":"35e14f55a4b93be441451d4a34818dd1f38f6709c54a2e8003a7684a760a9dc4"} Dec 09 17:03:27 crc kubenswrapper[4853]: I1209 17:03:27.948366 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerStarted","Data":"07ff6a3bb6f86645dbd1cb54fd394a8da511f1089b4f6898af325153a683bca6"} Dec 09 17:03:27 crc kubenswrapper[4853]: I1209 17:03:27.948376 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerStarted","Data":"ff3abb209f42abc635726dc5a84baa6918da96794f36d878ef495223dafe6cf0"} Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.593531 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.593690 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.593769 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.594941 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9090375773fe6a3dc05961f24a074b45b6f59cd6d5e586b7e14cdea2d22dac4"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.595032 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://e9090375773fe6a3dc05961f24a074b45b6f59cd6d5e586b7e14cdea2d22dac4" gracePeriod=600 Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.954448 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-776466597f-vpxr8" event={"ID":"fd3da79d-116b-48c5-bf0a-82e96c169e21","Type":"ContainerStarted","Data":"d0c3f32d1671a806102f54d795dba4d2c8d8f436f687fb193e207b716448d63a"} Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.966934 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerStarted","Data":"c741effabb5d0b200b2840e64f377c8875b85fcc0b1a8adae23eaf9c5f6ccf47"} Dec 09 17:03:28 crc kubenswrapper[4853]: I1209 17:03:28.984745 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-776466597f-vpxr8" podStartSLOduration=2.861953362 podStartE2EDuration="4.984711083s" podCreationTimestamp="2025-12-09 17:03:24 +0000 UTC" firstStartedPulling="2025-12-09 17:03:26.020544289 +0000 UTC m=+432.955283461" lastFinishedPulling="2025-12-09 17:03:28.143302 +0000 UTC m=+435.078041182" observedRunningTime="2025-12-09 17:03:28.978156537 +0000 UTC m=+435.912895719" watchObservedRunningTime="2025-12-09 17:03:28.984711083 +0000 UTC m=+435.919450265" Dec 09 17:03:29 crc kubenswrapper[4853]: E1209 17:03:29.735683 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e036ba1_c8bd_48d7_bd93_71993300b60f.slice/crio-conmon-e9090375773fe6a3dc05961f24a074b45b6f59cd6d5e586b7e14cdea2d22dac4.scope\": RecentStats: unable to find data in memory cache]" Dec 09 17:03:29 crc kubenswrapper[4853]: I1209 17:03:29.992177 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="e9090375773fe6a3dc05961f24a074b45b6f59cd6d5e586b7e14cdea2d22dac4" exitCode=0 Dec 09 17:03:29 crc kubenswrapper[4853]: I1209 17:03:29.992246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"e9090375773fe6a3dc05961f24a074b45b6f59cd6d5e586b7e14cdea2d22dac4"} Dec 09 17:03:29 crc kubenswrapper[4853]: I1209 17:03:29.992295 4853 scope.go:117] "RemoveContainer" containerID="c9d9f59af3db78337db77f066a5fa7b9a0ba5b193b1ef32b99b893cbedc91ed6" Dec 09 17:03:33 crc kubenswrapper[4853]: I1209 17:03:33.838126 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fd7cb74df-c9w9d" Dec 09 17:03:33 crc kubenswrapper[4853]: I1209 17:03:33.839336 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fd7cb74df-c9w9d" Dec 09 17:03:33 crc kubenswrapper[4853]: I1209 17:03:33.847567 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fd7cb74df-c9w9d" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.061158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"edf4edcd-1366-40ca-8127-20d4efcc9b0c","Type":"ContainerStarted","Data":"b1d1b35946d0329e25d2e111ca17a66dff4fa395ceed0b7c1c500394f98d54c0"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.071750 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" event={"ID":"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8","Type":"ContainerStarted","Data":"3d6f28680f7f9e7acbc84bdadf84646939ac0cc0b41a242c41bdff94b25a7fae"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.071825 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" event={"ID":"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8","Type":"ContainerStarted","Data":"86739f059cd6fbff09717bf4462b6ae9c3208832799226ef23c0295f5b6340c3"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.071843 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" event={"ID":"64f61264-80ba-4c5f-9f0c-2b5eb65b89d8","Type":"ContainerStarted","Data":"37752db0e015e250582daffce1d2ef034b101919bcbed3f440545dd4bc370da4"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.072024 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.073571 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-676657567-chvpg" event={"ID":"0245196f-6ba1-4767-86b7-b86872cbfff3","Type":"ContainerStarted","Data":"43249583da35a2ec3dc532d8e07656f93e503083827fd6bf331c93056dddf89c"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.076461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"6c6b3bde6cd549f0e31cb60caaff7dda9f88378c80c10391725bd8667fec2086"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.080481 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerStarted","Data":"f183883d320b3a21e36f992002b3be1d330d277281eca7eed5b10f706616260c"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.080509 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerStarted","Data":"475a1bc9e3aae83833868b91613a154c9d2be0ddba4ad6c7eb83ea4e399d3dcc"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.080519 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerStarted","Data":"5ef30e1f6c5f9e74d4df27cbbdc6c7665c00b0db351a760cce34e5860d0fc74b"} Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.087796 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fd7cb74df-c9w9d" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.089145 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.099586 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.547923934 podStartE2EDuration="15.099557681s" podCreationTimestamp="2025-12-09 17:03:19 +0000 UTC" firstStartedPulling="2025-12-09 17:03:20.969030447 +0000 UTC m=+427.903769629" lastFinishedPulling="2025-12-09 17:03:33.520664184 +0000 UTC m=+440.455403376" observedRunningTime="2025-12-09 17:03:34.092289844 +0000 UTC m=+441.027029026" watchObservedRunningTime="2025-12-09 17:03:34.099557681 +0000 UTC m=+441.034296863" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.197943 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-676657567-chvpg" podStartSLOduration=3.261375306 podStartE2EDuration="10.197872557s" podCreationTimestamp="2025-12-09 17:03:24 +0000 UTC" firstStartedPulling="2025-12-09 17:03:25.970902852 +0000 UTC m=+432.905642034" lastFinishedPulling="2025-12-09 17:03:32.907400103 +0000 UTC m=+439.842139285" observedRunningTime="2025-12-09 17:03:34.175901086 +0000 UTC m=+441.110640278" watchObservedRunningTime="2025-12-09 17:03:34.197872557 +0000 UTC m=+441.132611769" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.230422 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xp79b"] Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.254289 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-9ddd48f4-7fb22" podStartSLOduration=3.391041595 podStartE2EDuration="14.254264695s" podCreationTimestamp="2025-12-09 17:03:20 +0000 UTC" firstStartedPulling="2025-12-09 17:03:22.010138827 +0000 UTC m=+428.944878009" lastFinishedPulling="2025-12-09 17:03:32.873361927 +0000 UTC m=+439.808101109" observedRunningTime="2025-12-09 17:03:34.235797509 +0000 UTC m=+441.170536701" 
watchObservedRunningTime="2025-12-09 17:03:34.254264695 +0000 UTC m=+441.189003877" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.836864 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-676657567-chvpg" Dec 09 17:03:34 crc kubenswrapper[4853]: I1209 17:03:34.845351 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-676657567-chvpg" Dec 09 17:03:35 crc kubenswrapper[4853]: I1209 17:03:35.091140 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerStarted","Data":"92c493abbdb950d14e13ace761b458c209a5f2c65859f7ffbacf041b7fd8c014"} Dec 09 17:03:35 crc kubenswrapper[4853]: I1209 17:03:35.091224 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerStarted","Data":"d0f65a0e2447577e7ecb831d84a0fdd6188f9cf7a2e8dd01b6f78d46d4b4fa97"} Dec 09 17:03:35 crc kubenswrapper[4853]: I1209 17:03:35.091238 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c0881850-0cd6-4e89-8691-ee2f7965bc72","Type":"ContainerStarted","Data":"17fdd0bbc811973599d34e7f35c7e1c9378b4e7839dc2060aca1c8bd0bd74b36"} Dec 09 17:03:35 crc kubenswrapper[4853]: I1209 17:03:35.125428 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.546419732 podStartE2EDuration="10.125410279s" podCreationTimestamp="2025-12-09 17:03:25 +0000 UTC" firstStartedPulling="2025-12-09 17:03:26.928476853 +0000 UTC m=+433.863216045" lastFinishedPulling="2025-12-09 17:03:33.50746739 +0000 UTC m=+440.442206592" observedRunningTime="2025-12-09 17:03:35.123626602 +0000 UTC m=+442.058365794" watchObservedRunningTime="2025-12-09 17:03:35.125410279 +0000 UTC m=+442.060149461" Dec 09 17:03:35 crc kubenswrapper[4853]: I1209 17:03:35.411983 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:03:41 crc kubenswrapper[4853]: I1209 17:03:41.064895 4853 patch_prober.go:28] interesting pod/router-default-5444994796-ffzns container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 17:03:41 crc kubenswrapper[4853]: I1209 17:03:41.065378 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ffzns" podUID="d8892469-e13f-4dcf-ab96-106be91ab901" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 17:03:44 crc kubenswrapper[4853]: I1209 17:03:44.382993 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-776466597f-vpxr8" Dec 09 17:03:44 crc kubenswrapper[4853]: I1209 17:03:44.383508 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-776466597f-vpxr8" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.301763 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xp79b" 
podUID="71b908be-495e-4eb2-8429-56c89e4344f4" containerName="console" containerID="cri-o://b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a" gracePeriod=15 Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.447815 4853 patch_prober.go:28] interesting pod/console-f9d7485db-xp79b container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.448249 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-xp79b" podUID="71b908be-495e-4eb2-8429-56c89e4344f4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.661747 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xp79b_71b908be-495e-4eb2-8429-56c89e4344f4/console/0.log" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.661815 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.834797 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4q6k\" (UniqueName: \"kubernetes.io/projected/71b908be-495e-4eb2-8429-56c89e4344f4-kube-api-access-c4q6k\") pod \"71b908be-495e-4eb2-8429-56c89e4344f4\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.834879 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-oauth-serving-cert\") pod \"71b908be-495e-4eb2-8429-56c89e4344f4\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.834915 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-trusted-ca-bundle\") pod \"71b908be-495e-4eb2-8429-56c89e4344f4\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.834951 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-serving-cert\") pod \"71b908be-495e-4eb2-8429-56c89e4344f4\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.834978 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-console-config\") pod \"71b908be-495e-4eb2-8429-56c89e4344f4\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.835000 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-oauth-config\") pod \"71b908be-495e-4eb2-8429-56c89e4344f4\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.835055 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-service-ca\") pod \"71b908be-495e-4eb2-8429-56c89e4344f4\" (UID: \"71b908be-495e-4eb2-8429-56c89e4344f4\") " Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.835654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "71b908be-495e-4eb2-8429-56c89e4344f4" (UID: "71b908be-495e-4eb2-8429-56c89e4344f4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.835818 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-service-ca" (OuterVolumeSpecName: "service-ca") pod "71b908be-495e-4eb2-8429-56c89e4344f4" (UID: "71b908be-495e-4eb2-8429-56c89e4344f4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.836310 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-console-config" (OuterVolumeSpecName: "console-config") pod "71b908be-495e-4eb2-8429-56c89e4344f4" (UID: "71b908be-495e-4eb2-8429-56c89e4344f4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.836423 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "71b908be-495e-4eb2-8429-56c89e4344f4" (UID: "71b908be-495e-4eb2-8429-56c89e4344f4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.845821 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b908be-495e-4eb2-8429-56c89e4344f4-kube-api-access-c4q6k" (OuterVolumeSpecName: "kube-api-access-c4q6k") pod "71b908be-495e-4eb2-8429-56c89e4344f4" (UID: "71b908be-495e-4eb2-8429-56c89e4344f4"). InnerVolumeSpecName "kube-api-access-c4q6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.848630 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "71b908be-495e-4eb2-8429-56c89e4344f4" (UID: "71b908be-495e-4eb2-8429-56c89e4344f4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.849091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "71b908be-495e-4eb2-8429-56c89e4344f4" (UID: "71b908be-495e-4eb2-8429-56c89e4344f4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.936092 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4q6k\" (UniqueName: \"kubernetes.io/projected/71b908be-495e-4eb2-8429-56c89e4344f4-kube-api-access-c4q6k\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.936128 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.936139 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.936150 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.936158 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.936168 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71b908be-495e-4eb2-8429-56c89e4344f4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:03:59 crc kubenswrapper[4853]: I1209 17:03:59.936176 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71b908be-495e-4eb2-8429-56c89e4344f4-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.266286 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xp79b_71b908be-495e-4eb2-8429-56c89e4344f4/console/0.log" Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.266339 4853 generic.go:334] "Generic (PLEG): container finished" podID="71b908be-495e-4eb2-8429-56c89e4344f4" containerID="b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a" exitCode=2 Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.266368 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xp79b" event={"ID":"71b908be-495e-4eb2-8429-56c89e4344f4","Type":"ContainerDied","Data":"b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a"} Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.266396 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xp79b" event={"ID":"71b908be-495e-4eb2-8429-56c89e4344f4","Type":"ContainerDied","Data":"8df69ff4da87d20a0c42064f17ffe81c18061ff55a09e33b6db972b38e924f41"} Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.266414 4853 scope.go:117] "RemoveContainer" containerID="b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a" Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.266535 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xp79b" Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.287073 4853 scope.go:117] "RemoveContainer" containerID="b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a" Dec 09 17:04:00 crc kubenswrapper[4853]: E1209 17:04:00.287487 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a\": container with ID starting with b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a not found: ID does not exist" containerID="b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a" Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.287562 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a"} err="failed to get container status \"b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a\": rpc error: code = NotFound desc = could not find container \"b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a\": container with ID starting with b47d5d16dc2fa0bb1ed4f51fe7ddc6c059d6d67a92f7e4905e992c5499966c1a not found: ID does not exist" Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.295562 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xp79b"] Dec 09 17:04:00 crc kubenswrapper[4853]: I1209 17:04:00.299087 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xp79b"] Dec 09 17:04:01 crc kubenswrapper[4853]: I1209 17:04:01.579018 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b908be-495e-4eb2-8429-56c89e4344f4" path="/var/lib/kubelet/pods/71b908be-495e-4eb2-8429-56c89e4344f4/volumes" Dec 09 17:04:04 crc kubenswrapper[4853]: I1209 17:04:04.388663 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-776466597f-vpxr8" Dec 09 17:04:04 crc kubenswrapper[4853]: I1209 17:04:04.393562 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-776466597f-vpxr8" Dec 09 17:04:25 crc kubenswrapper[4853]: I1209 17:04:25.412008 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:04:25 crc kubenswrapper[4853]: I1209 17:04:25.451037 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:04:25 crc kubenswrapper[4853]: I1209 17:04:25.500019 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.222141 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bfcc5b469-tzkw9"] Dec 09 17:04:59 crc kubenswrapper[4853]: E1209 17:04:59.222994 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b908be-495e-4eb2-8429-56c89e4344f4" containerName="console" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.223015 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b908be-495e-4eb2-8429-56c89e4344f4" containerName="console" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.223210 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b908be-495e-4eb2-8429-56c89e4344f4" 
containerName="console" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.223773 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.238649 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bfcc5b469-tzkw9"] Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.392570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgl87\" (UniqueName: \"kubernetes.io/projected/e705ec5e-fd22-4050-be73-f79ec4c45320-kube-api-access-zgl87\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.392656 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-oauth-config\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.392680 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-trusted-ca-bundle\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.392891 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-oauth-serving-cert\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.393026 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-console-config\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.393178 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-serving-cert\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.393322 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-service-ca\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.494297 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgl87\" (UniqueName: \"kubernetes.io/projected/e705ec5e-fd22-4050-be73-f79ec4c45320-kube-api-access-zgl87\") pod 
\"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.494354 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-oauth-config\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.494384 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-trusted-ca-bundle\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.494437 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-oauth-serving-cert\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.494468 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-console-config\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.494490 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-serving-cert\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.494509 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-service-ca\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.495510 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-service-ca\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.495639 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-console-config\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.495948 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-oauth-serving-cert\") pod \"console-6bfcc5b469-tzkw9\" (UID: 
\"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.496951 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-trusted-ca-bundle\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.501229 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-serving-cert\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.509277 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-oauth-config\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.527397 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgl87\" (UniqueName: \"kubernetes.io/projected/e705ec5e-fd22-4050-be73-f79ec4c45320-kube-api-access-zgl87\") pod \"console-6bfcc5b469-tzkw9\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:04:59 crc kubenswrapper[4853]: I1209 17:04:59.585098 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:05:00 crc kubenswrapper[4853]: I1209 17:05:00.073818 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bfcc5b469-tzkw9"] Dec 09 17:05:00 crc kubenswrapper[4853]: I1209 17:05:00.680548 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bfcc5b469-tzkw9" event={"ID":"e705ec5e-fd22-4050-be73-f79ec4c45320","Type":"ContainerStarted","Data":"59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404"} Dec 09 17:05:00 crc kubenswrapper[4853]: I1209 17:05:00.680899 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bfcc5b469-tzkw9" event={"ID":"e705ec5e-fd22-4050-be73-f79ec4c45320","Type":"ContainerStarted","Data":"d0b82a5e731565551076495376755e1bd17055d6c2de4d02104003becef22c4c"} Dec 09 17:05:00 crc kubenswrapper[4853]: I1209 17:05:00.703894 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bfcc5b469-tzkw9" podStartSLOduration=1.703873135 podStartE2EDuration="1.703873135s" podCreationTimestamp="2025-12-09 17:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:05:00.697636138 +0000 UTC m=+527.632375330" watchObservedRunningTime="2025-12-09 17:05:00.703873135 +0000 UTC m=+527.638612317" Dec 09 17:05:09 crc kubenswrapper[4853]: I1209 17:05:09.613095 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:05:09 crc kubenswrapper[4853]: I1209 17:05:09.613733 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:05:09 crc kubenswrapper[4853]: I1209 17:05:09.620013 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:05:09 crc kubenswrapper[4853]: I1209 17:05:09.738433 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:05:09 crc kubenswrapper[4853]: I1209 17:05:09.793521 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fd7cb74df-c9w9d"] Dec 09 17:05:34 crc kubenswrapper[4853]: I1209 17:05:34.836912 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-fd7cb74df-c9w9d" podUID="fdeaefa7-0304-4062-9d4e-d2a2e0a15739" containerName="console" containerID="cri-o://d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101" gracePeriod=15 Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.173928 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fd7cb74df-c9w9d_fdeaefa7-0304-4062-9d4e-d2a2e0a15739/console/0.log" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.173993 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fd7cb74df-c9w9d" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.316394 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-oauth-serving-cert\") pod \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.316450 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-oauth-config\") pod \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.316486 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-serving-cert\") pod \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.316534 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-trusted-ca-bundle\") pod \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.316577 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrw6r\" (UniqueName: \"kubernetes.io/projected/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-kube-api-access-jrw6r\") pod \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.316710 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-service-ca\") pod \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " Dec 09 17:05:35 crc 
kubenswrapper[4853]: I1209 17:05:35.316774 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-config\") pod \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\" (UID: \"fdeaefa7-0304-4062-9d4e-d2a2e0a15739\") " Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.317499 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-service-ca" (OuterVolumeSpecName: "service-ca") pod "fdeaefa7-0304-4062-9d4e-d2a2e0a15739" (UID: "fdeaefa7-0304-4062-9d4e-d2a2e0a15739"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.317520 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fdeaefa7-0304-4062-9d4e-d2a2e0a15739" (UID: "fdeaefa7-0304-4062-9d4e-d2a2e0a15739"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.317561 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-config" (OuterVolumeSpecName: "console-config") pod "fdeaefa7-0304-4062-9d4e-d2a2e0a15739" (UID: "fdeaefa7-0304-4062-9d4e-d2a2e0a15739"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.317981 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fdeaefa7-0304-4062-9d4e-d2a2e0a15739" (UID: "fdeaefa7-0304-4062-9d4e-d2a2e0a15739"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.322253 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fdeaefa7-0304-4062-9d4e-d2a2e0a15739" (UID: "fdeaefa7-0304-4062-9d4e-d2a2e0a15739"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.322325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fdeaefa7-0304-4062-9d4e-d2a2e0a15739" (UID: "fdeaefa7-0304-4062-9d4e-d2a2e0a15739"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.322587 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-kube-api-access-jrw6r" (OuterVolumeSpecName: "kube-api-access-jrw6r") pod "fdeaefa7-0304-4062-9d4e-d2a2e0a15739" (UID: "fdeaefa7-0304-4062-9d4e-d2a2e0a15739"). InnerVolumeSpecName "kube-api-access-jrw6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.419384 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.419707 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.419729 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.419747 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.419764 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.419780 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.419796 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrw6r\" (UniqueName: \"kubernetes.io/projected/fdeaefa7-0304-4062-9d4e-d2a2e0a15739-kube-api-access-jrw6r\") on node \"crc\" DevicePath \"\"" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.902458 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fd7cb74df-c9w9d_fdeaefa7-0304-4062-9d4e-d2a2e0a15739/console/0.log" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.902509 4853 generic.go:334] "Generic (PLEG): container finished" podID="fdeaefa7-0304-4062-9d4e-d2a2e0a15739" containerID="d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101" exitCode=2 Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.902537 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-c9w9d" event={"ID":"fdeaefa7-0304-4062-9d4e-d2a2e0a15739","Type":"ContainerDied","Data":"d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101"} Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.902561 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-c9w9d" event={"ID":"fdeaefa7-0304-4062-9d4e-d2a2e0a15739","Type":"ContainerDied","Data":"c47df9daa44ebc5b55745167484c76795a398a3a8c7566b01739bdae1aa64adc"} Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.902580 4853 scope.go:117] "RemoveContainer" containerID="d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.902692 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fd7cb74df-c9w9d" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.935127 4853 scope.go:117] "RemoveContainer" containerID="d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101" Dec 09 17:05:35 crc kubenswrapper[4853]: E1209 17:05:35.936446 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101\": container with ID starting with d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101 not found: ID does not exist" containerID="d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.936500 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101"} err="failed to get container status \"d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101\": rpc error: code = NotFound desc = could not find container \"d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101\": container with ID starting with d917e038b6d5214093aaec6baa68ce15303dce6181b2e06e803874d8d9cf2101 not found: ID does not exist" Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.936826 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fd7cb74df-c9w9d"] Dec 09 17:05:35 crc kubenswrapper[4853]: I1209 17:05:35.942568 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-fd7cb74df-c9w9d"] Dec 09 17:05:37 crc kubenswrapper[4853]: I1209 17:05:37.580114 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdeaefa7-0304-4062-9d4e-d2a2e0a15739" path="/var/lib/kubelet/pods/fdeaefa7-0304-4062-9d4e-d2a2e0a15739/volumes" Dec 09 17:05:58 crc kubenswrapper[4853]: I1209 17:05:58.592714 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:05:58 crc kubenswrapper[4853]: I1209 17:05:58.593316 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:06:28 crc kubenswrapper[4853]: I1209 17:06:28.593492 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:06:28 crc kubenswrapper[4853]: I1209 17:06:28.594204 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:06:58 crc kubenswrapper[4853]: I1209 17:06:58.593156 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon 
Dec 09 17:06:58 crc kubenswrapper[4853]: I1209 17:06:58.593885 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 17:06:58 crc kubenswrapper[4853]: I1209 17:06:58.593957 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4"
Dec 09 17:06:58 crc kubenswrapper[4853]: I1209 17:06:58.596137 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c6b3bde6cd549f0e31cb60caaff7dda9f88378c80c10391725bd8667fec2086"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 17:06:58 crc kubenswrapper[4853]: I1209 17:06:58.596421 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://6c6b3bde6cd549f0e31cb60caaff7dda9f88378c80c10391725bd8667fec2086" gracePeriod=600
Dec 09 17:06:59 crc kubenswrapper[4853]: I1209 17:06:59.466545 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="6c6b3bde6cd549f0e31cb60caaff7dda9f88378c80c10391725bd8667fec2086" exitCode=0
Dec 09 17:06:59 crc kubenswrapper[4853]: I1209 17:06:59.466629 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"6c6b3bde6cd549f0e31cb60caaff7dda9f88378c80c10391725bd8667fec2086"}
Dec 09 17:06:59 crc kubenswrapper[4853]: I1209 17:06:59.467139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"8a118dfb72dd6cb3a09eba31a0c1c9fb6b48c60ceaaef13411d332f6c915d49b"}
Dec 09 17:06:59 crc kubenswrapper[4853]: I1209 17:06:59.467170 4853 scope.go:117] "RemoveContainer" containerID="e9090375773fe6a3dc05961f24a074b45b6f59cd6d5e586b7e14cdea2d22dac4"
Dec 09 17:08:56 crc kubenswrapper[4853]: I1209 17:08:56.231344 4853 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.877558 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"]
Dec 09 17:09:05 crc kubenswrapper[4853]: E1209 17:09:05.878506 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdeaefa7-0304-4062-9d4e-d2a2e0a15739" containerName="console"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.878521 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdeaefa7-0304-4062-9d4e-d2a2e0a15739" containerName="console"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.878664 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdeaefa7-0304-4062-9d4e-d2a2e0a15739" containerName="console"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.879808 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.883113 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.889639 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"]
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.986572 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.986700 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:05 crc kubenswrapper[4853]: I1209 17:09:05.986746 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjrvf\" (UniqueName: \"kubernetes.io/projected/174b41a8-0a58-43f6-b6cc-03f8864597e5-kube-api-access-zjrvf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.088565 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.088689 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjrvf\" (UniqueName: \"kubernetes.io/projected/174b41a8-0a58-43f6-b6cc-03f8864597e5-kube-api-access-zjrvf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.088828 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.089385 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.089774 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.118306 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjrvf\" (UniqueName: \"kubernetes.io/projected/174b41a8-0a58-43f6-b6cc-03f8864597e5-kube-api-access-zjrvf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.200719 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"
Dec 09 17:09:06 crc kubenswrapper[4853]: I1209 17:09:06.452657 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z"]
Dec 09 17:09:07 crc kubenswrapper[4853]: I1209 17:09:07.392457 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" event={"ID":"174b41a8-0a58-43f6-b6cc-03f8864597e5","Type":"ContainerStarted","Data":"d327969d0b6d85a40b569177916d4ab82dead92f0ef9da527c22372e48f11d20"}
Dec 09 17:09:07 crc kubenswrapper[4853]: I1209 17:09:07.392536 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" event={"ID":"174b41a8-0a58-43f6-b6cc-03f8864597e5","Type":"ContainerStarted","Data":"08b71fe16c9065bc6c9d9323d08c88c845e5fdb73b81b19c6c30c31b264d99d2"}
Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.203404 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z966d"]
Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.205097 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z966d"
Need to start a new one" pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.217731 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z966d"] Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.319675 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-utilities\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.319735 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-catalog-content\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.319788 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllsh\" (UniqueName: \"kubernetes.io/projected/b472ecf1-3746-457c-9101-14f607d89e17-kube-api-access-wllsh\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.399033 4853 generic.go:334] "Generic (PLEG): container finished" podID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerID="d327969d0b6d85a40b569177916d4ab82dead92f0ef9da527c22372e48f11d20" exitCode=0 Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.399077 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" event={"ID":"174b41a8-0a58-43f6-b6cc-03f8864597e5","Type":"ContainerDied","Data":"d327969d0b6d85a40b569177916d4ab82dead92f0ef9da527c22372e48f11d20"} Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.401123 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.421528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-utilities\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.421662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-catalog-content\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.421791 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllsh\" (UniqueName: \"kubernetes.io/projected/b472ecf1-3746-457c-9101-14f607d89e17-kube-api-access-wllsh\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.422163 4853 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-utilities\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.422238 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-catalog-content\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.447526 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllsh\" (UniqueName: \"kubernetes.io/projected/b472ecf1-3746-457c-9101-14f607d89e17-kube-api-access-wllsh\") pod \"redhat-operators-z966d\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") " pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.520279 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:08 crc kubenswrapper[4853]: I1209 17:09:08.705546 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z966d"] Dec 09 17:09:08 crc kubenswrapper[4853]: W1209 17:09:08.712374 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb472ecf1_3746_457c_9101_14f607d89e17.slice/crio-530f47111d543813d977e9d46d7a8cc03f04e0fb4e7a47181c35c12d69f512a2 WatchSource:0}: Error finding container 530f47111d543813d977e9d46d7a8cc03f04e0fb4e7a47181c35c12d69f512a2: Status 404 returned error can't find the container with id 530f47111d543813d977e9d46d7a8cc03f04e0fb4e7a47181c35c12d69f512a2 Dec 09 17:09:09 crc kubenswrapper[4853]: I1209 17:09:09.408093 4853 generic.go:334] "Generic (PLEG): container finished" podID="b472ecf1-3746-457c-9101-14f607d89e17" containerID="bd2e1c75083b97cbdf1c65f807901d45c3d6a52b00d833af8ec14be5b2507926" exitCode=0 Dec 09 17:09:09 crc kubenswrapper[4853]: I1209 17:09:09.408174 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z966d" event={"ID":"b472ecf1-3746-457c-9101-14f607d89e17","Type":"ContainerDied","Data":"bd2e1c75083b97cbdf1c65f807901d45c3d6a52b00d833af8ec14be5b2507926"} Dec 09 17:09:09 crc kubenswrapper[4853]: I1209 17:09:09.408434 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z966d" event={"ID":"b472ecf1-3746-457c-9101-14f607d89e17","Type":"ContainerStarted","Data":"530f47111d543813d977e9d46d7a8cc03f04e0fb4e7a47181c35c12d69f512a2"} Dec 09 17:09:10 crc kubenswrapper[4853]: I1209 17:09:10.418465 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z966d" event={"ID":"b472ecf1-3746-457c-9101-14f607d89e17","Type":"ContainerStarted","Data":"2a3fd739dee02efe8e08a51f7b7f62b992be5efdb2cf64c3658b31145a6cf73a"} Dec 09 17:09:10 crc kubenswrapper[4853]: I1209 17:09:10.420903 4853 generic.go:334] "Generic (PLEG): container finished" podID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerID="f3743373e8cddf6372d6a0a95f4f6e19eb00c86b2a13c00637e6d8944ce0e7fd" exitCode=0 Dec 09 17:09:10 crc kubenswrapper[4853]: I1209 17:09:10.420959 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" event={"ID":"174b41a8-0a58-43f6-b6cc-03f8864597e5","Type":"ContainerDied","Data":"f3743373e8cddf6372d6a0a95f4f6e19eb00c86b2a13c00637e6d8944ce0e7fd"} Dec 09 17:09:11 crc kubenswrapper[4853]: I1209 17:09:11.432803 4853 generic.go:334] "Generic (PLEG): container finished" podID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerID="6b1e6631a3ad57fa7152e243c5a8950a25aa2b3540c4b1b371efb01d7bf6684f" exitCode=0 Dec 09 17:09:11 crc kubenswrapper[4853]: I1209 17:09:11.433356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" event={"ID":"174b41a8-0a58-43f6-b6cc-03f8864597e5","Type":"ContainerDied","Data":"6b1e6631a3ad57fa7152e243c5a8950a25aa2b3540c4b1b371efb01d7bf6684f"} Dec 09 17:09:11 crc kubenswrapper[4853]: I1209 17:09:11.435686 4853 generic.go:334] "Generic (PLEG): container finished" podID="b472ecf1-3746-457c-9101-14f607d89e17" containerID="2a3fd739dee02efe8e08a51f7b7f62b992be5efdb2cf64c3658b31145a6cf73a" exitCode=0 Dec 09 17:09:11 crc kubenswrapper[4853]: I1209 17:09:11.435749 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z966d" event={"ID":"b472ecf1-3746-457c-9101-14f607d89e17","Type":"ContainerDied","Data":"2a3fd739dee02efe8e08a51f7b7f62b992be5efdb2cf64c3658b31145a6cf73a"} Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.443027 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z966d" event={"ID":"b472ecf1-3746-457c-9101-14f607d89e17","Type":"ContainerStarted","Data":"4a6100ef2b127ceee3ede6b23f79601a913c832247199ea05d31145bf09bfcd2"} Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.464318 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z966d" podStartSLOduration=1.9559800859999998 podStartE2EDuration="4.464298544s" podCreationTimestamp="2025-12-09 17:09:08 +0000 UTC" firstStartedPulling="2025-12-09 17:09:09.409710617 +0000 UTC m=+776.344449799" lastFinishedPulling="2025-12-09 17:09:11.918029075 +0000 UTC m=+778.852768257" observedRunningTime="2025-12-09 17:09:12.457502613 +0000 UTC m=+779.392241805" watchObservedRunningTime="2025-12-09 17:09:12.464298544 +0000 UTC m=+779.399037726" Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.675079 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.695575 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-bundle\") pod \"174b41a8-0a58-43f6-b6cc-03f8864597e5\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.695690 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-util\") pod \"174b41a8-0a58-43f6-b6cc-03f8864597e5\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.695767 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjrvf\" (UniqueName: \"kubernetes.io/projected/174b41a8-0a58-43f6-b6cc-03f8864597e5-kube-api-access-zjrvf\") pod \"174b41a8-0a58-43f6-b6cc-03f8864597e5\" (UID: \"174b41a8-0a58-43f6-b6cc-03f8864597e5\") " Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.698487 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-bundle" (OuterVolumeSpecName: "bundle") pod "174b41a8-0a58-43f6-b6cc-03f8864597e5" (UID: "174b41a8-0a58-43f6-b6cc-03f8864597e5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.704804 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174b41a8-0a58-43f6-b6cc-03f8864597e5-kube-api-access-zjrvf" (OuterVolumeSpecName: "kube-api-access-zjrvf") pod "174b41a8-0a58-43f6-b6cc-03f8864597e5" (UID: "174b41a8-0a58-43f6-b6cc-03f8864597e5"). InnerVolumeSpecName "kube-api-access-zjrvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.708684 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-util" (OuterVolumeSpecName: "util") pod "174b41a8-0a58-43f6-b6cc-03f8864597e5" (UID: "174b41a8-0a58-43f6-b6cc-03f8864597e5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.797958 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjrvf\" (UniqueName: \"kubernetes.io/projected/174b41a8-0a58-43f6-b6cc-03f8864597e5-kube-api-access-zjrvf\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.797999 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:12 crc kubenswrapper[4853]: I1209 17:09:12.798015 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/174b41a8-0a58-43f6-b6cc-03f8864597e5-util\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:13 crc kubenswrapper[4853]: I1209 17:09:13.451575 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" event={"ID":"174b41a8-0a58-43f6-b6cc-03f8864597e5","Type":"ContainerDied","Data":"08b71fe16c9065bc6c9d9323d08c88c845e5fdb73b81b19c6c30c31b264d99d2"} Dec 09 17:09:13 crc kubenswrapper[4853]: I1209 17:09:13.451637 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b71fe16c9065bc6c9d9323d08c88c845e5fdb73b81b19c6c30c31b264d99d2" Dec 09 17:09:13 crc kubenswrapper[4853]: I1209 17:09:13.451650 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z" Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.194427 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fzlgt"] Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.195145 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-controller" containerID="cri-o://a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.195188 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="nbdb" containerID="cri-o://ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.195257 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="sbdb" containerID="cri-o://46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.195266 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-acl-logging" containerID="cri-o://8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.195297 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="northd" 
containerID="cri-o://b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.195325 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.195303 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-node" containerID="cri-o://00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: I1209 17:09:17.239345 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" containerID="cri-o://507980d98ddb2b0da1d57c39f0786848bad044537478316a247f8a4f48fdcdc5" gracePeriod=30 Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.365582 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.365775 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.367913 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.368140 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.374296 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.374383 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="sbdb" Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.376785 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 09 17:09:17 crc kubenswrapper[4853]: E1209 17:09:17.376842 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="nbdb" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.487261 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/2.log" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.488156 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/1.log" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.488257 4853 generic.go:334] "Generic (PLEG): container finished" podID="8b02f072-d8cc-4c46-8159-fe99d19b24a6" containerID="7ada34554e8bab61755e7d0175d3ce2d43142a6cca373bc8134e19cf7596691c" exitCode=2 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.488331 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerDied","Data":"7ada34554e8bab61755e7d0175d3ce2d43142a6cca373bc8134e19cf7596691c"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.488384 4853 scope.go:117] "RemoveContainer" containerID="3d00976ac5c59173b8cac0ed2e081fca78b41cf512961825ca8647de33751384" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.488967 4853 scope.go:117] "RemoveContainer" containerID="7ada34554e8bab61755e7d0175d3ce2d43142a6cca373bc8134e19cf7596691c" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.490910 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovnkube-controller/3.log" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.504197 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovn-acl-logging/0.log" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.506901 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovn-controller/0.log" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510034 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" 
containerID="507980d98ddb2b0da1d57c39f0786848bad044537478316a247f8a4f48fdcdc5" exitCode=0 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510141 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6" exitCode=0 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510193 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e" exitCode=0 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510252 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191" exitCode=0 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510299 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540" exitCode=0 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510104 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"507980d98ddb2b0da1d57c39f0786848bad044537478316a247f8a4f48fdcdc5"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510347 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7" exitCode=0 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510441 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510475 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22" exitCode=143 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510498 4853 generic.go:334] "Generic (PLEG): container finished" podID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerID="a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d" exitCode=143 Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510497 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510541 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510563 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510572 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510581 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510590 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" event={"ID":"f18ca0bf-dc49-4000-97e9-9a64adac54de","Type":"ContainerDied","Data":"8610dbc708f30c46e7f79b376a60751703324f95922ce5b64bd15ee9d73de750"} Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.510613 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8610dbc708f30c46e7f79b376a60751703324f95922ce5b64bd15ee9d73de750" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.521171 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.521408 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.575427 4853 scope.go:117] "RemoveContainer" containerID="c93022f46dbc4cea54961029fa87d362525f3257e8b5830a59c9dbf516a3a78b" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.581153 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z966d" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.613802 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovn-acl-logging/0.log" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.615763 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovn-controller/0.log" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.618265 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697193 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c6zld"] Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697421 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="sbdb" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697437 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="sbdb" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697449 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697455 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697464 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerName="extract" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697469 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerName="extract" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697476 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kubecfg-setup" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697481 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kubecfg-setup" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697491 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerName="util" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697499 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerName="util" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697507 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697513 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697521 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697526 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697534 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697541 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697547 4853 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697553 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697560 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-acl-logging" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697566 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-acl-logging" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697580 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-node" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697585 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-node" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697606 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697611 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697619 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="nbdb" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697624 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="nbdb" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697632 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerName="pull" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697637 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerName="pull" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697646 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="northd" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697651 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="northd" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697743 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="nbdb" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697750 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-acl-logging" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697759 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="174b41a8-0a58-43f6-b6cc-03f8864597e5" containerName="extract" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697769 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697776 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697799 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697808 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovn-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697816 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-node" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697824 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="sbdb" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697831 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697839 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="northd" Dec 09 17:09:18 crc kubenswrapper[4853]: E1209 17:09:18.697937 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.697944 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.698046 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.698055 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" containerName="ovnkube-controller" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.699704 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783537 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-env-overrides\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783607 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783638 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-systemd\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783723 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-ovn\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783734 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783793 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783875 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-config\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.783989 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784090 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-kubelet\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784126 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784141 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-slash\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784154 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784183 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-bin\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784201 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-slash" (OuterVolumeSpecName: "host-slash") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784212 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784205 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-script-lib\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784250 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-openvswitch\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-netns\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784292 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-ovn-kubernetes\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784311 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fmxt\" (UniqueName: \"kubernetes.io/projected/f18ca0bf-dc49-4000-97e9-9a64adac54de-kube-api-access-8fmxt\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784344 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784375 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784375 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-var-lib-openvswitch\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784419 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-log-socket\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784447 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-netd\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784461 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-etc-openvswitch\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovn-node-metrics-cert\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784502 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-node-log\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784510 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784521 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784533 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784546 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-log-socket" (OuterVolumeSpecName: "log-socket") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784553 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-node-log" (OuterVolumeSpecName: "node-log") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784558 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784518 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-systemd-units\") pod \"f18ca0bf-dc49-4000-97e9-9a64adac54de\" (UID: \"f18ca0bf-dc49-4000-97e9-9a64adac54de\") " Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784581 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784759 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784796 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-slash\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784855 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-systemd\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784884 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-node-log\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784948 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-var-lib-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784967 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-kubelet\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.784991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-run-netns\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785006 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-cni-bin\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785022 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovnkube-config\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785065 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-systemd-units\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785088 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-cni-netd\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785109 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-log-socket\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785122 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-env-overrides\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785143 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-ovn\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785162 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovnkube-script-lib\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785209 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovn-node-metrics-cert\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785224 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbg9n\" (UniqueName: \"kubernetes.io/projected/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-kube-api-access-bbg9n\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785247 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-etc-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785301 4853 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785312 4853 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785321 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785330 4853 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785339 4853 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785348 4853 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785357 4853 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785365 4853 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785373 4853 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-cni-netd\") on node \"crc\" 
DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785381 4853 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785389 4853 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785397 4853 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785405 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785413 4853 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785421 4853 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785430 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.785470 4853 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.789550 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18ca0bf-dc49-4000-97e9-9a64adac54de-kube-api-access-8fmxt" (OuterVolumeSpecName: "kube-api-access-8fmxt") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "kube-api-access-8fmxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.797733 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.798978 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f18ca0bf-dc49-4000-97e9-9a64adac54de" (UID: "f18ca0bf-dc49-4000-97e9-9a64adac54de"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886495 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-etc-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886548 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-slash\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886587 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886626 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-systemd\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-node-log\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886675 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-var-lib-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886691 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-kubelet\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-run-netns\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886724 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-cni-bin\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886738 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovnkube-config\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886756 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886774 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-systemd-units\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886793 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-cni-netd\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886809 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-log-socket\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886822 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-env-overrides\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-ovn\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovnkube-script-lib\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886893 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovn-node-metrics-cert\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886908 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbg9n\" (UniqueName: \"kubernetes.io/projected/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-kube-api-access-bbg9n\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886942 4853 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f18ca0bf-dc49-4000-97e9-9a64adac54de-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886954 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fmxt\" (UniqueName: \"kubernetes.io/projected/f18ca0bf-dc49-4000-97e9-9a64adac54de-kube-api-access-8fmxt\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.886964 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f18ca0bf-dc49-4000-97e9-9a64adac54de-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887250 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-etc-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887299 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-cni-bin\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887333 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-node-log\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887356 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-var-lib-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887376 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-kubelet\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887398 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-run-netns\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-systemd\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-ovn\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887462 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-systemd-units\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887481 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-slash\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887497 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-run-ovn-kubernetes\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887473 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-cni-netd\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887498 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887541 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-run-openvswitch\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887815 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-env-overrides\") pod 
\"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.887844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovnkube-config\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.888061 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovnkube-script-lib\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.888121 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-log-socket\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.892117 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-ovn-node-metrics-cert\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:18 crc kubenswrapper[4853]: I1209 17:09:18.921148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbg9n\" (UniqueName: \"kubernetes.io/projected/d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d-kube-api-access-bbg9n\") pod \"ovnkube-node-c6zld\" (UID: \"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d\") " pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.015327 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" Dec 09 17:09:19 crc kubenswrapper[4853]: W1209 17:09:19.032129 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7bf74eb_dfc5_438a_a9c1_bc46ba31c51d.slice/crio-0e092e5dc6eafcdcf4749d93503216722152248ed2e08791da659f34361cdc11 WatchSource:0}: Error finding container 0e092e5dc6eafcdcf4749d93503216722152248ed2e08791da659f34361cdc11: Status 404 returned error can't find the container with id 0e092e5dc6eafcdcf4749d93503216722152248ed2e08791da659f34361cdc11 Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.517116 4853 generic.go:334] "Generic (PLEG): container finished" podID="d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d" containerID="27b5616d66cedb1a53914b22ba95547d5aaa99b9217c9872c3bf4e4e89cc145f" exitCode=0 Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.517351 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerDied","Data":"27b5616d66cedb1a53914b22ba95547d5aaa99b9217c9872c3bf4e4e89cc145f"} Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.517375 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"0e092e5dc6eafcdcf4749d93503216722152248ed2e08791da659f34361cdc11"} Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.520324 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fmrzg_8b02f072-d8cc-4c46-8159-fe99d19b24a6/kube-multus/2.log" Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.533767 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fmrzg" event={"ID":"8b02f072-d8cc-4c46-8159-fe99d19b24a6","Type":"ContainerStarted","Data":"8ac6f2d733fc2ffae1ef1751dd6850b09cde09c73da8fca7e61a5981ac099708"} Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.537190 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovn-acl-logging/0.log" Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.537780 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fzlgt_f18ca0bf-dc49-4000-97e9-9a64adac54de/ovn-controller/0.log" Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.538310 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fzlgt"
Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.625593 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z966d"
Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.673792 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fzlgt"]
Dec 09 17:09:19 crc kubenswrapper[4853]: I1209 17:09:19.681197 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fzlgt"]
Dec 09 17:09:20 crc kubenswrapper[4853]: I1209 17:09:20.580662 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"8e0eeeb6cc41407ded8e16d90dc9d9ed103fbc272f237092140fb028d8507680"}
Dec 09 17:09:20 crc kubenswrapper[4853]: I1209 17:09:20.580903 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"566b07042d64059333321134ecd9f3758015cd1b2c1069ad7558c29112911c06"}
Dec 09 17:09:20 crc kubenswrapper[4853]: I1209 17:09:20.580914 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"ac58e224f8303779bba899946a9e9f5e9eb5a0590b4346fee7d213651a16b2db"}
Dec 09 17:09:20 crc kubenswrapper[4853]: I1209 17:09:20.580922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"e473ebcba2177d0b42b79f5bcf9f93960e2ba09524daeac8b1d32d22c46ec044"}
Dec 09 17:09:20 crc kubenswrapper[4853]: I1209 17:09:20.580929 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"3b4f3588b0484712ccb094e054e9993fdac6027b91d35b1e120161cdf4b3aeaf"}
Dec 09 17:09:20 crc kubenswrapper[4853]: I1209 17:09:20.580937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"0c31d49d0f8de46263651a729c253a5af729a1b5a0098350f6162b74b7d96e34"}
Dec 09 17:09:20 crc kubenswrapper[4853]: I1209 17:09:20.755457 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z966d"]
Dec 09 17:09:21 crc kubenswrapper[4853]: I1209 17:09:21.573812 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18ca0bf-dc49-4000-97e9-9a64adac54de" path="/var/lib/kubelet/pods/f18ca0bf-dc49-4000-97e9-9a64adac54de/volumes"
Dec 09 17:09:21 crc kubenswrapper[4853]: I1209 17:09:21.584901 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z966d" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="registry-server" containerID="cri-o://4a6100ef2b127ceee3ede6b23f79601a913c832247199ea05d31145bf09bfcd2" gracePeriod=2
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.833259 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"]
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.834813 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.836345 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-8kv4q"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.836950 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.837152 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.851803 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7bg\" (UniqueName: \"kubernetes.io/projected/04755536-2551-4375-9e7c-1b901b498f8b-kube-api-access-vk7bg\") pod \"obo-prometheus-operator-668cf9dfbb-pkncp\" (UID: \"04755536-2551-4375-9e7c-1b901b498f8b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.950231 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"]
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.951113 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.952723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7bg\" (UniqueName: \"kubernetes.io/projected/04755536-2551-4375-9e7c-1b901b498f8b-kube-api-access-vk7bg\") pod \"obo-prometheus-operator-668cf9dfbb-pkncp\" (UID: \"04755536-2551-4375-9e7c-1b901b498f8b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.952815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d97a2cbf-13a7-4985-a22b-aa4cd04d192c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw\" (UID: \"d97a2cbf-13a7-4985-a22b-aa4cd04d192c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.952890 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d97a2cbf-13a7-4985-a22b-aa4cd04d192c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw\" (UID: \"d97a2cbf-13a7-4985-a22b-aa4cd04d192c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.953095 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-9mgjp"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.953699 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.964317 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"]
Dec 09 17:09:23 crc kubenswrapper[4853]: I1209 17:09:23.965211 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.006136 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7bg\" (UniqueName: \"kubernetes.io/projected/04755536-2551-4375-9e7c-1b901b498f8b-kube-api-access-vk7bg\") pod \"obo-prometheus-operator-668cf9dfbb-pkncp\" (UID: \"04755536-2551-4375-9e7c-1b901b498f8b\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.053610 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d97a2cbf-13a7-4985-a22b-aa4cd04d192c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw\" (UID: \"d97a2cbf-13a7-4985-a22b-aa4cd04d192c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.053693 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb455954-ec70-4aa6-bbf1-39354677512b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn\" (UID: \"eb455954-ec70-4aa6-bbf1-39354677512b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.053780 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb455954-ec70-4aa6-bbf1-39354677512b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn\" (UID: \"eb455954-ec70-4aa6-bbf1-39354677512b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.053824 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d97a2cbf-13a7-4985-a22b-aa4cd04d192c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw\" (UID: \"d97a2cbf-13a7-4985-a22b-aa4cd04d192c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.058368 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d97a2cbf-13a7-4985-a22b-aa4cd04d192c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw\" (UID: \"d97a2cbf-13a7-4985-a22b-aa4cd04d192c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.061127 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d97a2cbf-13a7-4985-a22b-aa4cd04d192c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw\" (UID: \"d97a2cbf-13a7-4985-a22b-aa4cd04d192c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.151609 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.164837 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb455954-ec70-4aa6-bbf1-39354677512b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn\" (UID: \"eb455954-ec70-4aa6-bbf1-39354677512b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.164938 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb455954-ec70-4aa6-bbf1-39354677512b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn\" (UID: \"eb455954-ec70-4aa6-bbf1-39354677512b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.167583 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb455954-ec70-4aa6-bbf1-39354677512b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn\" (UID: \"eb455954-ec70-4aa6-bbf1-39354677512b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.168156 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb455954-ec70-4aa6-bbf1-39354677512b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn\" (UID: \"eb455954-ec70-4aa6-bbf1-39354677512b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.174984 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(e3442b5f61b918c4e36761324fc80d9d684ac8c30d304a29df97ea03136da46f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.175057 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(e3442b5f61b918c4e36761324fc80d9d684ac8c30d304a29df97ea03136da46f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.175078 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(e3442b5f61b918c4e36761324fc80d9d684ac8c30d304a29df97ea03136da46f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.175121 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators(04755536-2551-4375-9e7c-1b901b498f8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators(04755536-2551-4375-9e7c-1b901b498f8b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(e3442b5f61b918c4e36761324fc80d9d684ac8c30d304a29df97ea03136da46f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp" podUID="04755536-2551-4375-9e7c-1b901b498f8b"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.191791 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-q4nfc"]
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.192569 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.195475 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-hmpth"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.199047 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.265868 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.266286 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b0f2e0b-84ec-4b60-afa2-4f090a35596d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-q4nfc\" (UID: \"1b0f2e0b-84ec-4b60-afa2-4f090a35596d\") " pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.266409 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52s5\" (UniqueName: \"kubernetes.io/projected/1b0f2e0b-84ec-4b60-afa2-4f090a35596d-kube-api-access-b52s5\") pod \"observability-operator-d8bb48f5d-q4nfc\" (UID: \"1b0f2e0b-84ec-4b60-afa2-4f090a35596d\") " pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.281967 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.294811 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(efa33e2a77bc2c2476f5bcaeef32e3f2b7bf2a09bddc94b0a32298bec6521c0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.294865 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(efa33e2a77bc2c2476f5bcaeef32e3f2b7bf2a09bddc94b0a32298bec6521c0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.294888 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(efa33e2a77bc2c2476f5bcaeef32e3f2b7bf2a09bddc94b0a32298bec6521c0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.294929 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators(d97a2cbf-13a7-4985-a22b-aa4cd04d192c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators(d97a2cbf-13a7-4985-a22b-aa4cd04d192c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(efa33e2a77bc2c2476f5bcaeef32e3f2b7bf2a09bddc94b0a32298bec6521c0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw" podUID="d97a2cbf-13a7-4985-a22b-aa4cd04d192c"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.315363 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(412f1d5fe4ed1dcc4bfc076a7c40e1812f63b700804d47bf22f94a4bf5843ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.315424 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(412f1d5fe4ed1dcc4bfc076a7c40e1812f63b700804d47bf22f94a4bf5843ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.315444 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(412f1d5fe4ed1dcc4bfc076a7c40e1812f63b700804d47bf22f94a4bf5843ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.315485 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators(eb455954-ec70-4aa6-bbf1-39354677512b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators(eb455954-ec70-4aa6-bbf1-39354677512b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(412f1d5fe4ed1dcc4bfc076a7c40e1812f63b700804d47bf22f94a4bf5843ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn" podUID="eb455954-ec70-4aa6-bbf1-39354677512b"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.359884 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wclnd"]
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.360777 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.362849 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jmhg9"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.367407 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b0f2e0b-84ec-4b60-afa2-4f090a35596d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-q4nfc\" (UID: \"1b0f2e0b-84ec-4b60-afa2-4f090a35596d\") " pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.367995 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52s5\" (UniqueName: \"kubernetes.io/projected/1b0f2e0b-84ec-4b60-afa2-4f090a35596d-kube-api-access-b52s5\") pod \"observability-operator-d8bb48f5d-q4nfc\" (UID: \"1b0f2e0b-84ec-4b60-afa2-4f090a35596d\") " pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.373436 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b0f2e0b-84ec-4b60-afa2-4f090a35596d-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-q4nfc\" (UID: \"1b0f2e0b-84ec-4b60-afa2-4f090a35596d\") " pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.391378 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52s5\" (UniqueName: \"kubernetes.io/projected/1b0f2e0b-84ec-4b60-afa2-4f090a35596d-kube-api-access-b52s5\") pod \"observability-operator-d8bb48f5d-q4nfc\" (UID: \"1b0f2e0b-84ec-4b60-afa2-4f090a35596d\") " pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.469324 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6s89\" (UniqueName: \"kubernetes.io/projected/37fdfb11-d235-458b-8963-bf7cb3a9b589-kube-api-access-t6s89\") pod \"perses-operator-5446b9c989-wclnd\" (UID: \"37fdfb11-d235-458b-8963-bf7cb3a9b589\") " pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.469578 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/37fdfb11-d235-458b-8963-bf7cb3a9b589-openshift-service-ca\") pod \"perses-operator-5446b9c989-wclnd\" (UID: \"37fdfb11-d235-458b-8963-bf7cb3a9b589\") " pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.506432 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.524732 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(21bb88ceebf3f7bee86f9b8aedeb564bcb6c973cfa32c36346412248e2d047ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.524793 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(21bb88ceebf3f7bee86f9b8aedeb564bcb6c973cfa32c36346412248e2d047ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.524812 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(21bb88ceebf3f7bee86f9b8aedeb564bcb6c973cfa32c36346412248e2d047ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.524855 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-q4nfc_openshift-operators(1b0f2e0b-84ec-4b60-afa2-4f090a35596d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-q4nfc_openshift-operators(1b0f2e0b-84ec-4b60-afa2-4f090a35596d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(21bb88ceebf3f7bee86f9b8aedeb564bcb6c973cfa32c36346412248e2d047ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc" podUID="1b0f2e0b-84ec-4b60-afa2-4f090a35596d"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.570816 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/37fdfb11-d235-458b-8963-bf7cb3a9b589-openshift-service-ca\") pod \"perses-operator-5446b9c989-wclnd\" (UID: \"37fdfb11-d235-458b-8963-bf7cb3a9b589\") " pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.571460 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6s89\" (UniqueName: \"kubernetes.io/projected/37fdfb11-d235-458b-8963-bf7cb3a9b589-kube-api-access-t6s89\") pod \"perses-operator-5446b9c989-wclnd\" (UID: \"37fdfb11-d235-458b-8963-bf7cb3a9b589\") " pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.572000 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/37fdfb11-d235-458b-8963-bf7cb3a9b589-openshift-service-ca\") pod \"perses-operator-5446b9c989-wclnd\" (UID: \"37fdfb11-d235-458b-8963-bf7cb3a9b589\") " pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.590255 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6s89\" (UniqueName: \"kubernetes.io/projected/37fdfb11-d235-458b-8963-bf7cb3a9b589-kube-api-access-t6s89\") pod \"perses-operator-5446b9c989-wclnd\" (UID: \"37fdfb11-d235-458b-8963-bf7cb3a9b589\") " pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: I1209 17:09:24.677795 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.702106 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(f9786c5d6bed99a4079e4657f3327834467ed12bc6c4c91fc3b216afc0233af8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.702177 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(f9786c5d6bed99a4079e4657f3327834467ed12bc6c4c91fc3b216afc0233af8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.702201 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(f9786c5d6bed99a4079e4657f3327834467ed12bc6c4c91fc3b216afc0233af8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:24 crc kubenswrapper[4853]: E1209 17:09:24.702250 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-wclnd_openshift-operators(37fdfb11-d235-458b-8963-bf7cb3a9b589)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-wclnd_openshift-operators(37fdfb11-d235-458b-8963-bf7cb3a9b589)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(f9786c5d6bed99a4079e4657f3327834467ed12bc6c4c91fc3b216afc0233af8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-wclnd" podUID="37fdfb11-d235-458b-8963-bf7cb3a9b589"
Dec 09 17:09:25 crc kubenswrapper[4853]: I1209 17:09:25.621692 4853 generic.go:334] "Generic (PLEG): container finished" podID="b472ecf1-3746-457c-9101-14f607d89e17" containerID="4a6100ef2b127ceee3ede6b23f79601a913c832247199ea05d31145bf09bfcd2" exitCode=0
Dec 09 17:09:25 crc kubenswrapper[4853]: I1209 17:09:25.621782 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z966d" event={"ID":"b472ecf1-3746-457c-9101-14f607d89e17","Type":"ContainerDied","Data":"4a6100ef2b127ceee3ede6b23f79601a913c832247199ea05d31145bf09bfcd2"}
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.516290 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z966d"
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.632929 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"e6daf805597b0d61e3eea63ebad7de7ed5d91df6d2f9f86ee6aa8bdc9f3863c8"}
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.635108 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z966d" event={"ID":"b472ecf1-3746-457c-9101-14f607d89e17","Type":"ContainerDied","Data":"530f47111d543813d977e9d46d7a8cc03f04e0fb4e7a47181c35c12d69f512a2"}
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.635146 4853 scope.go:117] "RemoveContainer" containerID="4a6100ef2b127ceee3ede6b23f79601a913c832247199ea05d31145bf09bfcd2"
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.635180 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z966d"
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.662655 4853 scope.go:117] "RemoveContainer" containerID="2a3fd739dee02efe8e08a51f7b7f62b992be5efdb2cf64c3658b31145a6cf73a"
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.675834 4853 scope.go:117] "RemoveContainer" containerID="bd2e1c75083b97cbdf1c65f807901d45c3d6a52b00d833af8ec14be5b2507926"
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.701760 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wllsh\" (UniqueName: \"kubernetes.io/projected/b472ecf1-3746-457c-9101-14f607d89e17-kube-api-access-wllsh\") pod \"b472ecf1-3746-457c-9101-14f607d89e17\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") "
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.701810 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-catalog-content\") pod \"b472ecf1-3746-457c-9101-14f607d89e17\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") "
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.701910 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-utilities\") pod \"b472ecf1-3746-457c-9101-14f607d89e17\" (UID: \"b472ecf1-3746-457c-9101-14f607d89e17\") "
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.702669 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-utilities" (OuterVolumeSpecName: "utilities") pod "b472ecf1-3746-457c-9101-14f607d89e17" (UID: "b472ecf1-3746-457c-9101-14f607d89e17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.708782 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b472ecf1-3746-457c-9101-14f607d89e17-kube-api-access-wllsh" (OuterVolumeSpecName: "kube-api-access-wllsh") pod "b472ecf1-3746-457c-9101-14f607d89e17" (UID: "b472ecf1-3746-457c-9101-14f607d89e17"). InnerVolumeSpecName "kube-api-access-wllsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.803152 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wllsh\" (UniqueName: \"kubernetes.io/projected/b472ecf1-3746-457c-9101-14f607d89e17-kube-api-access-wllsh\") on node \"crc\" DevicePath \"\""
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.803189 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.816184 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b472ecf1-3746-457c-9101-14f607d89e17" (UID: "b472ecf1-3746-457c-9101-14f607d89e17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.904147 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b472ecf1-3746-457c-9101-14f607d89e17-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.976416 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z966d"]
Dec 09 17:09:26 crc kubenswrapper[4853]: I1209 17:09:26.982863 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z966d"]
Dec 09 17:09:27 crc kubenswrapper[4853]: I1209 17:09:27.575915 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b472ecf1-3746-457c-9101-14f607d89e17" path="/var/lib/kubelet/pods/b472ecf1-3746-457c-9101-14f607d89e17/volumes"
Dec 09 17:09:28 crc kubenswrapper[4853]: I1209 17:09:28.593326 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 17:09:28 crc kubenswrapper[4853]: I1209 17:09:28.593376 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.651252 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wclnd"]
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.652051 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.652671 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.661900 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" event={"ID":"d7bf74eb-dfc5-438a-a9c1-bc46ba31c51d","Type":"ContainerStarted","Data":"24a7dce24e11db51e3ecbb1e1936d2138246757fb77313794a6248ee1e50757e"}
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.662519 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.662685 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.662703 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.663658 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"]
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.663839 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.664298 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.676800 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"]
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.676911 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.677385 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.694921 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"]
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.695068 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.695519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.699237 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-q4nfc"]
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.699341 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.699808 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.709919 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.720682 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld"
Dec 09 17:09:29 crc kubenswrapper[4853]: I1209 17:09:29.721522 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld" podStartSLOduration=11.721509628 podStartE2EDuration="11.721509628s" podCreationTimestamp="2025-12-09 17:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:09:29.721127577 +0000 UTC m=+796.655866759" watchObservedRunningTime="2025-12-09 17:09:29.721509628 +0000 UTC m=+796.656248810"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.762989 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(7e5aa8901f2b7224a2f0d12ae975fbbfbbef93572473e2651875fe87ff116528): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.763046 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(7e5aa8901f2b7224a2f0d12ae975fbbfbbef93572473e2651875fe87ff116528): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.763067 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(7e5aa8901f2b7224a2f0d12ae975fbbfbbef93572473e2651875fe87ff116528): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.763106 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-wclnd_openshift-operators(37fdfb11-d235-458b-8963-bf7cb3a9b589)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-wclnd_openshift-operators(37fdfb11-d235-458b-8963-bf7cb3a9b589)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-wclnd_openshift-operators_37fdfb11-d235-458b-8963-bf7cb3a9b589_0(7e5aa8901f2b7224a2f0d12ae975fbbfbbef93572473e2651875fe87ff116528): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-wclnd" podUID="37fdfb11-d235-458b-8963-bf7cb3a9b589"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.787964 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(6f6e93f3415dd28e81ff3ec3474b3cd35f1babb2335aca5357ab1ca17f2e2e2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.788036 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(6f6e93f3415dd28e81ff3ec3474b3cd35f1babb2335aca5357ab1ca17f2e2e2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.788065 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(6f6e93f3415dd28e81ff3ec3474b3cd35f1babb2335aca5357ab1ca17f2e2e2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.788113 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators(04755536-2551-4375-9e7c-1b901b498f8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators(04755536-2551-4375-9e7c-1b901b498f8b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-pkncp_openshift-operators_04755536-2551-4375-9e7c-1b901b498f8b_0(6f6e93f3415dd28e81ff3ec3474b3cd35f1babb2335aca5357ab1ca17f2e2e2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp" podUID="04755536-2551-4375-9e7c-1b901b498f8b"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.817412 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(94d2b21422be5451989e07a3e243c4b3ec17918ee3ff7c138e2096330b7bdbc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.817475 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(94d2b21422be5451989e07a3e243c4b3ec17918ee3ff7c138e2096330b7bdbc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.817500 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(94d2b21422be5451989e07a3e243c4b3ec17918ee3ff7c138e2096330b7bdbc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.817550 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators(d97a2cbf-13a7-4985-a22b-aa4cd04d192c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators(d97a2cbf-13a7-4985-a22b-aa4cd04d192c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_openshift-operators_d97a2cbf-13a7-4985-a22b-aa4cd04d192c_0(94d2b21422be5451989e07a3e243c4b3ec17918ee3ff7c138e2096330b7bdbc8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw" podUID="d97a2cbf-13a7-4985-a22b-aa4cd04d192c"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.828940 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(eed4a6c5e5468f6db485d31ae3f5d99f941e5c65c309b15c849c58537387bcf5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.829030 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(eed4a6c5e5468f6db485d31ae3f5d99f941e5c65c309b15c849c58537387bcf5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.829055 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(eed4a6c5e5468f6db485d31ae3f5d99f941e5c65c309b15c849c58537387bcf5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.829124 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-q4nfc_openshift-operators(1b0f2e0b-84ec-4b60-afa2-4f090a35596d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-q4nfc_openshift-operators(1b0f2e0b-84ec-4b60-afa2-4f090a35596d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-q4nfc_openshift-operators_1b0f2e0b-84ec-4b60-afa2-4f090a35596d_0(eed4a6c5e5468f6db485d31ae3f5d99f941e5c65c309b15c849c58537387bcf5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc" podUID="1b0f2e0b-84ec-4b60-afa2-4f090a35596d"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.838273 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(1ec023f2221b183dc0ab064ef8285ebfef98484f83ccecbe73627b94e28f4e20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.838336 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(1ec023f2221b183dc0ab064ef8285ebfef98484f83ccecbe73627b94e28f4e20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.838358 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(1ec023f2221b183dc0ab064ef8285ebfef98484f83ccecbe73627b94e28f4e20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:29 crc kubenswrapper[4853]: E1209 17:09:29.838417 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators(eb455954-ec70-4aa6-bbf1-39354677512b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators(eb455954-ec70-4aa6-bbf1-39354677512b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators_eb455954-ec70-4aa6-bbf1-39354677512b_0(1ec023f2221b183dc0ab064ef8285ebfef98484f83ccecbe73627b94e28f4e20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn" podUID="eb455954-ec70-4aa6-bbf1-39354677512b"
Dec 09 17:09:41 crc kubenswrapper[4853]: I1209 17:09:41.566518 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:41 crc kubenswrapper[4853]: I1209 17:09:41.566625 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:41 crc kubenswrapper[4853]: I1209 17:09:41.567970 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wclnd"
Dec 09 17:09:41 crc kubenswrapper[4853]: I1209 17:09:41.567997 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"
Dec 09 17:09:41 crc kubenswrapper[4853]: I1209 17:09:41.886448 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp"]
Dec 09 17:09:42 crc kubenswrapper[4853]: W1209 17:09:42.043329 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37fdfb11_d235_458b_8963_bf7cb3a9b589.slice/crio-a2412473f160546b39892d9121419beca2d4052e00cbf6a0b16a6ad0f9221692 WatchSource:0}: Error finding container a2412473f160546b39892d9121419beca2d4052e00cbf6a0b16a6ad0f9221692: Status 404 returned error can't find the container with id a2412473f160546b39892d9121419beca2d4052e00cbf6a0b16a6ad0f9221692
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.043861 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wclnd"]
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.566701 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.566702 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.568017 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.568545 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc"
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.749178 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp" event={"ID":"04755536-2551-4375-9e7c-1b901b498f8b","Type":"ContainerStarted","Data":"e54a3d1b3b0323a1e47d77b43f25023522a8e18c2b837824008ff7f9bc60ccfd"}
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.750719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-wclnd" event={"ID":"37fdfb11-d235-458b-8963-bf7cb3a9b589","Type":"ContainerStarted","Data":"a2412473f160546b39892d9121419beca2d4052e00cbf6a0b16a6ad0f9221692"}
Dec 09 17:09:42 crc kubenswrapper[4853]: I1209 17:09:42.981022 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-q4nfc"]
Dec 09 17:09:42 crc kubenswrapper[4853]: W1209 17:09:42.984369 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b0f2e0b_84ec_4b60_afa2_4f090a35596d.slice/crio-758aaab4c2f06277f58be46c66e1bb43fed37344c88bbcf22b1667657e0f1ada WatchSource:0}: Error finding container 758aaab4c2f06277f58be46c66e1bb43fed37344c88bbcf22b1667657e0f1ada: Status 404 returned error can't find the container with id 758aaab4c2f06277f58be46c66e1bb43fed37344c88bbcf22b1667657e0f1ada
Dec 09 17:09:43 crc kubenswrapper[4853]: I1209 17:09:43.042289 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn"]
Dec 09 17:09:43 crc kubenswrapper[4853]: I1209 17:09:43.758517 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn" event={"ID":"eb455954-ec70-4aa6-bbf1-39354677512b","Type":"ContainerStarted","Data":"bff06359730d0c7cf9db40288ca2695161946f714cf446839bd9d874b95a6e59"}
Dec 09 17:09:43 crc kubenswrapper[4853]: I1209 17:09:43.760759 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc" event={"ID":"1b0f2e0b-84ec-4b60-afa2-4f090a35596d","Type":"ContainerStarted","Data":"758aaab4c2f06277f58be46c66e1bb43fed37344c88bbcf22b1667657e0f1ada"}
Dec 09 17:09:45 crc kubenswrapper[4853]: I1209 17:09:45.566922 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:45 crc kubenswrapper[4853]: I1209 17:09:45.567741 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"
Dec 09 17:09:45 crc kubenswrapper[4853]: I1209 17:09:45.788463 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw"]
Dec 09 17:09:45 crc kubenswrapper[4853]: W1209 17:09:45.809264 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd97a2cbf_13a7_4985_a22b_aa4cd04d192c.slice/crio-79701783efdf59136a06e51981bb232b2148293cc7d537bc8f04b320198a236e WatchSource:0}: Error finding container 79701783efdf59136a06e51981bb232b2148293cc7d537bc8f04b320198a236e: Status 404 returned error can't find the container with id 79701783efdf59136a06e51981bb232b2148293cc7d537bc8f04b320198a236e
Dec 09 17:09:46 crc kubenswrapper[4853]: I1209 17:09:46.792869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw" event={"ID":"d97a2cbf-13a7-4985-a22b-aa4cd04d192c","Type":"ContainerStarted","Data":"79701783efdf59136a06e51981bb232b2148293cc7d537bc8f04b320198a236e"}
Dec 09 17:09:49 crc kubenswrapper[4853]: I1209 17:09:49.044221 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c6zld"
Dec 09 17:09:56 crc kubenswrapper[4853]: E1209 17:09:56.598183 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"
Dec 09 17:09:56 crc kubenswrapper[4853]: E1209 17:09:56.598866 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_openshift-operators(eb455954-ec70-4aa6-bbf1-39354677512b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 17:09:56 crc kubenswrapper[4853]: E1209 17:09:56.600038 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn" podUID="eb455954-ec70-4aa6-bbf1-39354677512b"
Dec 09 17:09:56 crc kubenswrapper[4853]: E1209 17:09:56.863527 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn" podUID="eb455954-ec70-4aa6-bbf1-39354677512b"
Dec 09 17:09:57 crc kubenswrapper[4853]: E1209 17:09:57.059344 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385"
Dec 09 17:09:57 crc kubenswrapper[4853]: E1209 17:09:57.059567 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6s89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-wclnd_openshift-operators(37fdfb11-d235-458b-8963-bf7cb3a9b589): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 17:09:57 crc kubenswrapper[4853]: E1209 17:09:57.060786 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-wclnd" podUID="37fdfb11-d235-458b-8963-bf7cb3a9b589" Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.867329 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc" event={"ID":"1b0f2e0b-84ec-4b60-afa2-4f090a35596d","Type":"ContainerStarted","Data":"96acccf6b7c780e11dd2c99c0268d99ff06e2f5cb1e83eafb11a0c4a13ab0f00"} Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.868335 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc" Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.869673 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp" event={"ID":"04755536-2551-4375-9e7c-1b901b498f8b","Type":"ContainerStarted","Data":"a2cdff54f1a7dddb8614c202b32ea01a93a994bb2fffd80e77b89d26606c5097"} Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.872792 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw" event={"ID":"d97a2cbf-13a7-4985-a22b-aa4cd04d192c","Type":"ContainerStarted","Data":"9b068ea77f7a0abd73668f5b46e4f5fe844b8909769162177ec6533f2d2960fd"} Dec 09 17:09:57 crc kubenswrapper[4853]: E1209 17:09:57.873965 4853 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-wclnd" podUID="37fdfb11-d235-458b-8963-bf7cb3a9b589" Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.889916 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc" podStartSLOduration=19.74863911 podStartE2EDuration="33.889893405s" podCreationTimestamp="2025-12-09 17:09:24 +0000 UTC" firstStartedPulling="2025-12-09 17:09:42.987259334 +0000 UTC m=+809.921998516" lastFinishedPulling="2025-12-09 17:09:57.128513629 +0000 UTC m=+824.063252811" observedRunningTime="2025-12-09 17:09:57.88400834 +0000 UTC m=+824.818747532" watchObservedRunningTime="2025-12-09 17:09:57.889893405 +0000 UTC m=+824.824632587" Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.895555 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-q4nfc" Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.934062 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw" podStartSLOduration=23.600714734 podStartE2EDuration="34.934041294s" podCreationTimestamp="2025-12-09 17:09:23 +0000 UTC" firstStartedPulling="2025-12-09 17:09:45.812054663 +0000 UTC m=+812.746793845" lastFinishedPulling="2025-12-09 17:09:57.145381223 +0000 UTC m=+824.080120405" observedRunningTime="2025-12-09 17:09:57.929794624 +0000 UTC m=+824.864533816" watchObservedRunningTime="2025-12-09 17:09:57.934041294 +0000 UTC m=+824.868780476" Dec 09 17:09:57 crc kubenswrapper[4853]: I1209 17:09:57.960670 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-pkncp" podStartSLOduration=19.725101018 podStartE2EDuration="34.96065109s" podCreationTimestamp="2025-12-09 17:09:23 +0000 UTC" firstStartedPulling="2025-12-09 17:09:41.892974628 +0000 UTC m=+808.827713810" lastFinishedPulling="2025-12-09 17:09:57.12852468 +0000 UTC m=+824.063263882" observedRunningTime="2025-12-09 17:09:57.95884291 +0000 UTC m=+824.893582102" watchObservedRunningTime="2025-12-09 17:09:57.96065109 +0000 UTC m=+824.895390272" Dec 09 17:09:58 crc kubenswrapper[4853]: I1209 17:09:58.593649 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:09:58 crc kubenswrapper[4853]: I1209 17:09:58.593733 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.707980 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-f4lf9"] Dec 09 17:10:04 crc kubenswrapper[4853]: E1209 17:10:04.708962 
4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="extract-utilities" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.708987 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="extract-utilities" Dec 09 17:10:04 crc kubenswrapper[4853]: E1209 17:10:04.709002 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="registry-server" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.709013 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="registry-server" Dec 09 17:10:04 crc kubenswrapper[4853]: E1209 17:10:04.709050 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="extract-content" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.709061 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="extract-content" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.709264 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b472ecf1-3746-457c-9101-14f607d89e17" containerName="registry-server" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.709969 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.712014 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dq489" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.712315 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.717997 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.726415 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-f4lf9"] Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.734514 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fdjh9"] Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.735376 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fdjh9" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.738883 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cqxgm" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.750365 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b8xr7"] Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.751448 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.754894 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7wp85" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.756225 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fdjh9"] Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.776746 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-b8xr7"] Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.849456 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw84l\" (UniqueName: \"kubernetes.io/projected/9548e933-9e30-4c00-9713-84238a3a557d-kube-api-access-cw84l\") pod \"cert-manager-cainjector-7f985d654d-f4lf9\" (UID: \"9548e933-9e30-4c00-9713-84238a3a557d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.849503 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmxvm\" (UniqueName: \"kubernetes.io/projected/c7f7bef8-1add-4d1a-b159-3651264fc6de-kube-api-access-dmxvm\") pod \"cert-manager-webhook-5655c58dd6-b8xr7\" (UID: \"c7f7bef8-1add-4d1a-b159-3651264fc6de\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.849625 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnqfm\" (UniqueName: \"kubernetes.io/projected/f37279b1-e5b2-4c60-a604-70931c3f028d-kube-api-access-xnqfm\") pod \"cert-manager-5b446d88c5-fdjh9\" (UID: \"f37279b1-e5b2-4c60-a604-70931c3f028d\") " pod="cert-manager/cert-manager-5b446d88c5-fdjh9" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.950966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnqfm\" (UniqueName: \"kubernetes.io/projected/f37279b1-e5b2-4c60-a604-70931c3f028d-kube-api-access-xnqfm\") pod \"cert-manager-5b446d88c5-fdjh9\" (UID: \"f37279b1-e5b2-4c60-a604-70931c3f028d\") " pod="cert-manager/cert-manager-5b446d88c5-fdjh9" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.951088 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw84l\" (UniqueName: \"kubernetes.io/projected/9548e933-9e30-4c00-9713-84238a3a557d-kube-api-access-cw84l\") pod \"cert-manager-cainjector-7f985d654d-f4lf9\" (UID: \"9548e933-9e30-4c00-9713-84238a3a557d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.951120 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmxvm\" (UniqueName: \"kubernetes.io/projected/c7f7bef8-1add-4d1a-b159-3651264fc6de-kube-api-access-dmxvm\") pod \"cert-manager-webhook-5655c58dd6-b8xr7\" (UID: \"c7f7bef8-1add-4d1a-b159-3651264fc6de\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.968380 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmxvm\" (UniqueName: \"kubernetes.io/projected/c7f7bef8-1add-4d1a-b159-3651264fc6de-kube-api-access-dmxvm\") pod \"cert-manager-webhook-5655c58dd6-b8xr7\" (UID: \"c7f7bef8-1add-4d1a-b159-3651264fc6de\") " 
pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.969929 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw84l\" (UniqueName: \"kubernetes.io/projected/9548e933-9e30-4c00-9713-84238a3a557d-kube-api-access-cw84l\") pod \"cert-manager-cainjector-7f985d654d-f4lf9\" (UID: \"9548e933-9e30-4c00-9713-84238a3a557d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" Dec 09 17:10:04 crc kubenswrapper[4853]: I1209 17:10:04.970753 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnqfm\" (UniqueName: \"kubernetes.io/projected/f37279b1-e5b2-4c60-a604-70931c3f028d-kube-api-access-xnqfm\") pod \"cert-manager-5b446d88c5-fdjh9\" (UID: \"f37279b1-e5b2-4c60-a604-70931c3f028d\") " pod="cert-manager/cert-manager-5b446d88c5-fdjh9" Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.030012 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.054845 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-fdjh9" Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.069672 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.496455 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-f4lf9"] Dec 09 17:10:05 crc kubenswrapper[4853]: W1209 17:10:05.505120 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9548e933_9e30_4c00_9713_84238a3a557d.slice/crio-7d77f5c0abb350e68efacb1a98e85d6a48bdf16b2242eb4ceaa3350886f4d7f0 WatchSource:0}: Error finding container 7d77f5c0abb350e68efacb1a98e85d6a48bdf16b2242eb4ceaa3350886f4d7f0: Status 404 returned error can't find the container with id 7d77f5c0abb350e68efacb1a98e85d6a48bdf16b2242eb4ceaa3350886f4d7f0 Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.587331 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-fdjh9"] Dec 09 17:10:05 crc kubenswrapper[4853]: W1209 17:10:05.590734 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf37279b1_e5b2_4c60_a604_70931c3f028d.slice/crio-6254401b57e6faa68c945d7f17d5cb1ecb5d783d13bf4973d81046b0dc46d44d WatchSource:0}: Error finding container 6254401b57e6faa68c945d7f17d5cb1ecb5d783d13bf4973d81046b0dc46d44d: Status 404 returned error can't find the container with id 6254401b57e6faa68c945d7f17d5cb1ecb5d783d13bf4973d81046b0dc46d44d Dec 09 17:10:05 crc kubenswrapper[4853]: W1209 17:10:05.628942 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7f7bef8_1add_4d1a_b159_3651264fc6de.slice/crio-3530f575c4a8adbb778613d4ad10d307f4ce84ed3add56437d83c1e4c026dfc2 WatchSource:0}: Error finding container 3530f575c4a8adbb778613d4ad10d307f4ce84ed3add56437d83c1e4c026dfc2: Status 404 returned error can't find the container with id 3530f575c4a8adbb778613d4ad10d307f4ce84ed3add56437d83c1e4c026dfc2 Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.632280 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["cert-manager/cert-manager-webhook-5655c58dd6-b8xr7"] Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.926535 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fdjh9" event={"ID":"f37279b1-e5b2-4c60-a604-70931c3f028d","Type":"ContainerStarted","Data":"6254401b57e6faa68c945d7f17d5cb1ecb5d783d13bf4973d81046b0dc46d44d"} Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.927671 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" event={"ID":"9548e933-9e30-4c00-9713-84238a3a557d","Type":"ContainerStarted","Data":"7d77f5c0abb350e68efacb1a98e85d6a48bdf16b2242eb4ceaa3350886f4d7f0"} Dec 09 17:10:05 crc kubenswrapper[4853]: I1209 17:10:05.928717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" event={"ID":"c7f7bef8-1add-4d1a-b159-3651264fc6de","Type":"ContainerStarted","Data":"3530f575c4a8adbb778613d4ad10d307f4ce84ed3add56437d83c1e4c026dfc2"} Dec 09 17:10:08 crc kubenswrapper[4853]: I1209 17:10:08.954527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" event={"ID":"c7f7bef8-1add-4d1a-b159-3651264fc6de","Type":"ContainerStarted","Data":"db8ed417cdeee11f2d1b058ff89da196617eca623a9b05dac84b229d4bc496fe"} Dec 09 17:10:08 crc kubenswrapper[4853]: I1209 17:10:08.955411 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" Dec 09 17:10:08 crc kubenswrapper[4853]: I1209 17:10:08.974277 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" podStartSLOduration=1.810033325 podStartE2EDuration="4.974208077s" podCreationTimestamp="2025-12-09 17:10:04 +0000 UTC" firstStartedPulling="2025-12-09 17:10:05.630937899 +0000 UTC m=+832.565677081" lastFinishedPulling="2025-12-09 17:10:08.795112631 +0000 UTC m=+835.729851833" observedRunningTime="2025-12-09 17:10:08.970328549 +0000 UTC m=+835.905067771" watchObservedRunningTime="2025-12-09 17:10:08.974208077 +0000 UTC m=+835.908947299" Dec 09 17:10:09 crc kubenswrapper[4853]: I1209 17:10:09.962731 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-fdjh9" event={"ID":"f37279b1-e5b2-4c60-a604-70931c3f028d","Type":"ContainerStarted","Data":"554dabee3539ae2afbe430d6a974efc5ce6d71fcc573db406211674648d491b9"} Dec 09 17:10:09 crc kubenswrapper[4853]: I1209 17:10:09.965337 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" event={"ID":"9548e933-9e30-4c00-9713-84238a3a557d","Type":"ContainerStarted","Data":"345f20db936a889ff263ed18923acad7979e6225bc3b49d0ae2d14dcf96185f3"} Dec 09 17:10:09 crc kubenswrapper[4853]: I1209 17:10:09.980448 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-fdjh9" podStartSLOduration=2.690458892 podStartE2EDuration="5.980425473s" podCreationTimestamp="2025-12-09 17:10:04 +0000 UTC" firstStartedPulling="2025-12-09 17:10:05.592006928 +0000 UTC m=+832.526746110" lastFinishedPulling="2025-12-09 17:10:08.881973509 +0000 UTC m=+835.816712691" observedRunningTime="2025-12-09 17:10:09.976624337 +0000 UTC m=+836.911363519" watchObservedRunningTime="2025-12-09 17:10:09.980425473 +0000 UTC m=+836.915164655" Dec 09 17:10:11 crc kubenswrapper[4853]: I1209 17:10:11.591748 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-f4lf9" podStartSLOduration=4.270921862 podStartE2EDuration="7.591728579s" podCreationTimestamp="2025-12-09 17:10:04 +0000 UTC" firstStartedPulling="2025-12-09 17:10:05.507219908 +0000 UTC m=+832.441959090" lastFinishedPulling="2025-12-09 17:10:08.828026605 +0000 UTC m=+835.762765807" observedRunningTime="2025-12-09 17:10:10.006958658 +0000 UTC m=+836.941697840" watchObservedRunningTime="2025-12-09 17:10:11.591728579 +0000 UTC m=+838.526467771" Dec 09 17:10:11 crc kubenswrapper[4853]: I1209 17:10:11.978586 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn" event={"ID":"eb455954-ec70-4aa6-bbf1-39354677512b","Type":"ContainerStarted","Data":"4834dc4dd6ac71421a2a2f54aea92db975e3282a939b129ee0a512d10dfa564f"} Dec 09 17:10:12 crc kubenswrapper[4853]: I1209 17:10:12.000563 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn" podStartSLOduration=-9223371987.854229 podStartE2EDuration="49.000546161s" podCreationTimestamp="2025-12-09 17:09:23 +0000 UTC" firstStartedPulling="2025-12-09 17:09:43.048123813 +0000 UTC m=+809.982862995" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:10:11.996515608 +0000 UTC m=+838.931254800" watchObservedRunningTime="2025-12-09 17:10:12.000546161 +0000 UTC m=+838.935285343" Dec 09 17:10:12 crc kubenswrapper[4853]: I1209 17:10:12.985843 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-wclnd" event={"ID":"37fdfb11-d235-458b-8963-bf7cb3a9b589","Type":"ContainerStarted","Data":"84b14266c57591cecc91859437ee0156d48a5a87aa7a58dfba9cc60feb84dcfc"} Dec 09 17:10:12 crc kubenswrapper[4853]: I1209 17:10:12.986397 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-wclnd" Dec 09 17:10:13 crc kubenswrapper[4853]: I1209 17:10:13.017089 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-wclnd" podStartSLOduration=18.508305434 podStartE2EDuration="49.017063636s" podCreationTimestamp="2025-12-09 17:09:24 +0000 UTC" firstStartedPulling="2025-12-09 17:09:42.064550302 +0000 UTC m=+808.999289484" lastFinishedPulling="2025-12-09 17:10:12.573308504 +0000 UTC m=+839.508047686" observedRunningTime="2025-12-09 17:10:13.013335222 +0000 UTC m=+839.948074414" watchObservedRunningTime="2025-12-09 17:10:13.017063636 +0000 UTC m=+839.951802858" Dec 09 17:10:14 crc kubenswrapper[4853]: I1209 17:10:14.869072 4853 scope.go:117] "RemoveContainer" containerID="2720b4881f8e39f1ae885932ce0bd2b0d4cef44e27affe4b29555868b491ff44" Dec 09 17:10:14 crc kubenswrapper[4853]: I1209 17:10:14.886835 4853 scope.go:117] "RemoveContainer" containerID="050637d55adbca25a45607a6298fcd87967e342ce31bbbc623bdf1a94f026540" Dec 09 17:10:14 crc kubenswrapper[4853]: I1209 17:10:14.909686 4853 scope.go:117] "RemoveContainer" containerID="507980d98ddb2b0da1d57c39f0786848bad044537478316a247f8a4f48fdcdc5" Dec 09 17:10:14 crc kubenswrapper[4853]: I1209 17:10:14.928808 4853 scope.go:117] "RemoveContainer" containerID="ded4b00097a5baaf4ba3f61138abead2b1a03eeccceab4606ec702007732e31e" Dec 09 17:10:14 crc kubenswrapper[4853]: I1209 17:10:14.947499 4853 scope.go:117] "RemoveContainer" 
containerID="b99eb1b82329a337d674e1cf8eb768dce4598b47fc49dc1886073a0185e55191" Dec 09 17:10:14 crc kubenswrapper[4853]: I1209 17:10:14.962942 4853 scope.go:117] "RemoveContainer" containerID="8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22" Dec 09 17:10:15 crc kubenswrapper[4853]: I1209 17:10:15.005000 4853 scope.go:117] "RemoveContainer" containerID="a92a8a19c3e2580d920afae875b4ecc11d3e9a4ffd4ca2785a7546ad45b78b4d" Dec 09 17:10:15 crc kubenswrapper[4853]: E1209 17:10:15.006156 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22\": container with ID starting with 8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22 not found: ID does not exist" containerID="8ef1970b4e3aee02b1f5e29e43779b4c707fff41f2526ae9416e33d1f4b9ab22" Dec 09 17:10:15 crc kubenswrapper[4853]: I1209 17:10:15.035073 4853 scope.go:117] "RemoveContainer" containerID="46a1830f57dacff194f3e94fb6eb8c575d6547ba91f6d1c9ee0b0b7610ef7ad6" Dec 09 17:10:15 crc kubenswrapper[4853]: I1209 17:10:15.074102 4853 scope.go:117] "RemoveContainer" containerID="00950869d232a77f391f992f538afce780d558a11d80abf96de784c00ee18dd7" Dec 09 17:10:15 crc kubenswrapper[4853]: I1209 17:10:15.078667 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-b8xr7" Dec 09 17:10:24 crc kubenswrapper[4853]: I1209 17:10:24.680271 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-wclnd" Dec 09 17:10:28 crc kubenswrapper[4853]: I1209 17:10:28.592862 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:10:28 crc kubenswrapper[4853]: I1209 17:10:28.593455 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:10:28 crc kubenswrapper[4853]: I1209 17:10:28.593529 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:10:28 crc kubenswrapper[4853]: I1209 17:10:28.594579 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a118dfb72dd6cb3a09eba31a0c1c9fb6b48c60ceaaef13411d332f6c915d49b"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:10:28 crc kubenswrapper[4853]: I1209 17:10:28.594681 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://8a118dfb72dd6cb3a09eba31a0c1c9fb6b48c60ceaaef13411d332f6c915d49b" gracePeriod=600 Dec 09 17:10:31 crc kubenswrapper[4853]: I1209 17:10:31.118132 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="8a118dfb72dd6cb3a09eba31a0c1c9fb6b48c60ceaaef13411d332f6c915d49b" exitCode=0 Dec 09 17:10:31 crc kubenswrapper[4853]: I1209 17:10:31.118204 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"8a118dfb72dd6cb3a09eba31a0c1c9fb6b48c60ceaaef13411d332f6c915d49b"} Dec 09 17:10:31 crc kubenswrapper[4853]: I1209 17:10:31.118690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"0f1e13e4d459d808e60b6045bc9ee20eb1af70f1e10bf78d11a94776b32f36e9"} Dec 09 17:10:31 crc kubenswrapper[4853]: I1209 17:10:31.118713 4853 scope.go:117] "RemoveContainer" containerID="6c6b3bde6cd549f0e31cb60caaff7dda9f88378c80c10391725bd8667fec2086" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.571271 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn"] Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.573206 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.576579 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.589972 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn"] Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.642532 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.642861 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.642987 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfdm\" (UniqueName: \"kubernetes.io/projected/f04547bf-9d5c-4408-a107-bbe92020eb73-kube-api-access-fnfdm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.743958 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-util\") pod 
\"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.744016 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnfdm\" (UniqueName: \"kubernetes.io/projected/f04547bf-9d5c-4408-a107-bbe92020eb73-kube-api-access-fnfdm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.744050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.744582 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.744746 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.770307 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnfdm\" (UniqueName: \"kubernetes.io/projected/f04547bf-9d5c-4408-a107-bbe92020eb73-kube-api-access-fnfdm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.899478 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.964119 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv"] Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.965766 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:38 crc kubenswrapper[4853]: I1209 17:10:38.972982 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv"] Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.048902 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.049152 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.049236 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/512ebe87-626f-4880-b7d5-20d61f740a8b-kube-api-access-mqdkx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.150037 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.150096 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.150181 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/512ebe87-626f-4880-b7d5-20d61f740a8b-kube-api-access-mqdkx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.150703 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " 
pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.150790 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.178515 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/512ebe87-626f-4880-b7d5-20d61f740a8b-kube-api-access-mqdkx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.291320 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.416340 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn"] Dec 09 17:10:39 crc kubenswrapper[4853]: I1209 17:10:39.740575 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv"] Dec 09 17:10:39 crc kubenswrapper[4853]: W1209 17:10:39.746663 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod512ebe87_626f_4880_b7d5_20d61f740a8b.slice/crio-af982a593842207cc6f2cb05a447082daf122bc6922318a19a2e26203b2786a9 WatchSource:0}: Error finding container af982a593842207cc6f2cb05a447082daf122bc6922318a19a2e26203b2786a9: Status 404 returned error can't find the container with id af982a593842207cc6f2cb05a447082daf122bc6922318a19a2e26203b2786a9 Dec 09 17:10:40 crc kubenswrapper[4853]: I1209 17:10:40.184820 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" event={"ID":"f04547bf-9d5c-4408-a107-bbe92020eb73","Type":"ContainerStarted","Data":"b70c68761b6dbd27d8269a4c6b78578119d8c80aa8a2be089e4043350470e11d"} Dec 09 17:10:40 crc kubenswrapper[4853]: I1209 17:10:40.186240 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" event={"ID":"512ebe87-626f-4880-b7d5-20d61f740a8b","Type":"ContainerStarted","Data":"af982a593842207cc6f2cb05a447082daf122bc6922318a19a2e26203b2786a9"} Dec 09 17:10:41 crc kubenswrapper[4853]: I1209 17:10:41.195920 4853 generic.go:334] "Generic (PLEG): container finished" podID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerID="0d0ba985607b388362451441233c877a335022e5b1792cb41ed14b0c80e854be" exitCode=0 Dec 09 17:10:41 crc kubenswrapper[4853]: I1209 17:10:41.196039 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" event={"ID":"f04547bf-9d5c-4408-a107-bbe92020eb73","Type":"ContainerDied","Data":"0d0ba985607b388362451441233c877a335022e5b1792cb41ed14b0c80e854be"} Dec 09 
17:10:41 crc kubenswrapper[4853]: I1209 17:10:41.198076 4853 generic.go:334] "Generic (PLEG): container finished" podID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerID="fd1ac63b185e4b38548f758d611debdf50474c1a99debcc340304423b7ef5a50" exitCode=0 Dec 09 17:10:41 crc kubenswrapper[4853]: I1209 17:10:41.198117 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" event={"ID":"512ebe87-626f-4880-b7d5-20d61f740a8b","Type":"ContainerDied","Data":"fd1ac63b185e4b38548f758d611debdf50474c1a99debcc340304423b7ef5a50"} Dec 09 17:10:43 crc kubenswrapper[4853]: E1209 17:10:43.040634 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf04547bf_9d5c_4408_a107_bbe92020eb73.slice/crio-conmon-259a1541633339b0d528cce609968211fbe9cbd34e013fee992dc696da2ecf9c.scope\": RecentStats: unable to find data in memory cache]" Dec 09 17:10:43 crc kubenswrapper[4853]: I1209 17:10:43.214655 4853 generic.go:334] "Generic (PLEG): container finished" podID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerID="80af67710a837ef9183c97d1bc374c8767297b6913556d856ac9c3790083c170" exitCode=0 Dec 09 17:10:43 crc kubenswrapper[4853]: I1209 17:10:43.214716 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" event={"ID":"512ebe87-626f-4880-b7d5-20d61f740a8b","Type":"ContainerDied","Data":"80af67710a837ef9183c97d1bc374c8767297b6913556d856ac9c3790083c170"} Dec 09 17:10:43 crc kubenswrapper[4853]: I1209 17:10:43.218094 4853 generic.go:334] "Generic (PLEG): container finished" podID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerID="259a1541633339b0d528cce609968211fbe9cbd34e013fee992dc696da2ecf9c" exitCode=0 Dec 09 17:10:43 crc kubenswrapper[4853]: I1209 17:10:43.218131 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" event={"ID":"f04547bf-9d5c-4408-a107-bbe92020eb73","Type":"ContainerDied","Data":"259a1541633339b0d528cce609968211fbe9cbd34e013fee992dc696da2ecf9c"} Dec 09 17:10:44 crc kubenswrapper[4853]: I1209 17:10:44.226639 4853 generic.go:334] "Generic (PLEG): container finished" podID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerID="695704888edea176d135d6593d140173ee8f408592232145ba69327b30d27e5e" exitCode=0 Dec 09 17:10:44 crc kubenswrapper[4853]: I1209 17:10:44.226730 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" event={"ID":"f04547bf-9d5c-4408-a107-bbe92020eb73","Type":"ContainerDied","Data":"695704888edea176d135d6593d140173ee8f408592232145ba69327b30d27e5e"} Dec 09 17:10:44 crc kubenswrapper[4853]: I1209 17:10:44.231087 4853 generic.go:334] "Generic (PLEG): container finished" podID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerID="e76d5d4989444e0ab7458fb3d4a9722201d1401007ae0d9c7f050fcdec6e3054" exitCode=0 Dec 09 17:10:44 crc kubenswrapper[4853]: I1209 17:10:44.231135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" event={"ID":"512ebe87-626f-4880-b7d5-20d61f740a8b","Type":"ContainerDied","Data":"e76d5d4989444e0ab7458fb3d4a9722201d1401007ae0d9c7f050fcdec6e3054"} Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.531896 
4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.541037 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.652882 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-util\") pod \"f04547bf-9d5c-4408-a107-bbe92020eb73\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.653097 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-bundle\") pod \"512ebe87-626f-4880-b7d5-20d61f740a8b\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.654630 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-bundle" (OuterVolumeSpecName: "bundle") pod "512ebe87-626f-4880-b7d5-20d61f740a8b" (UID: "512ebe87-626f-4880-b7d5-20d61f740a8b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.654856 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/512ebe87-626f-4880-b7d5-20d61f740a8b-kube-api-access-mqdkx\") pod \"512ebe87-626f-4880-b7d5-20d61f740a8b\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.654896 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-util\") pod \"512ebe87-626f-4880-b7d5-20d61f740a8b\" (UID: \"512ebe87-626f-4880-b7d5-20d61f740a8b\") " Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.656515 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-bundle\") pod \"f04547bf-9d5c-4408-a107-bbe92020eb73\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.656720 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnfdm\" (UniqueName: \"kubernetes.io/projected/f04547bf-9d5c-4408-a107-bbe92020eb73-kube-api-access-fnfdm\") pod \"f04547bf-9d5c-4408-a107-bbe92020eb73\" (UID: \"f04547bf-9d5c-4408-a107-bbe92020eb73\") " Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.657174 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.657738 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-bundle" (OuterVolumeSpecName: "bundle") pod "f04547bf-9d5c-4408-a107-bbe92020eb73" (UID: "f04547bf-9d5c-4408-a107-bbe92020eb73"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.662606 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512ebe87-626f-4880-b7d5-20d61f740a8b-kube-api-access-mqdkx" (OuterVolumeSpecName: "kube-api-access-mqdkx") pod "512ebe87-626f-4880-b7d5-20d61f740a8b" (UID: "512ebe87-626f-4880-b7d5-20d61f740a8b"). InnerVolumeSpecName "kube-api-access-mqdkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.663163 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04547bf-9d5c-4408-a107-bbe92020eb73-kube-api-access-fnfdm" (OuterVolumeSpecName: "kube-api-access-fnfdm") pod "f04547bf-9d5c-4408-a107-bbe92020eb73" (UID: "f04547bf-9d5c-4408-a107-bbe92020eb73"). InnerVolumeSpecName "kube-api-access-fnfdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.667268 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-util" (OuterVolumeSpecName: "util") pod "f04547bf-9d5c-4408-a107-bbe92020eb73" (UID: "f04547bf-9d5c-4408-a107-bbe92020eb73"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.670138 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-util" (OuterVolumeSpecName: "util") pod "512ebe87-626f-4880-b7d5-20d61f740a8b" (UID: "512ebe87-626f-4880-b7d5-20d61f740a8b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.758939 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.758970 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnfdm\" (UniqueName: \"kubernetes.io/projected/f04547bf-9d5c-4408-a107-bbe92020eb73-kube-api-access-fnfdm\") on node \"crc\" DevicePath \"\"" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.758982 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04547bf-9d5c-4408-a107-bbe92020eb73-util\") on node \"crc\" DevicePath \"\"" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.758992 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/512ebe87-626f-4880-b7d5-20d61f740a8b-kube-api-access-mqdkx\") on node \"crc\" DevicePath \"\"" Dec 09 17:10:45 crc kubenswrapper[4853]: I1209 17:10:45.759006 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/512ebe87-626f-4880-b7d5-20d61f740a8b-util\") on node \"crc\" DevicePath \"\"" Dec 09 17:10:46 crc kubenswrapper[4853]: I1209 17:10:46.245802 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" Dec 09 17:10:46 crc kubenswrapper[4853]: I1209 17:10:46.245820 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn" event={"ID":"f04547bf-9d5c-4408-a107-bbe92020eb73","Type":"ContainerDied","Data":"b70c68761b6dbd27d8269a4c6b78578119d8c80aa8a2be089e4043350470e11d"} Dec 09 17:10:46 crc kubenswrapper[4853]: I1209 17:10:46.246264 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70c68761b6dbd27d8269a4c6b78578119d8c80aa8a2be089e4043350470e11d" Dec 09 17:10:46 crc kubenswrapper[4853]: I1209 17:10:46.248475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" event={"ID":"512ebe87-626f-4880-b7d5-20d61f740a8b","Type":"ContainerDied","Data":"af982a593842207cc6f2cb05a447082daf122bc6922318a19a2e26203b2786a9"} Dec 09 17:10:46 crc kubenswrapper[4853]: I1209 17:10:46.248520 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af982a593842207cc6f2cb05a447082daf122bc6922318a19a2e26203b2786a9" Dec 09 17:10:46 crc kubenswrapper[4853]: I1209 17:10:46.248530 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.251836 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l"] Dec 09 17:10:54 crc kubenswrapper[4853]: E1209 17:10:54.252660 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerName="util" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252673 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerName="util" Dec 09 17:10:54 crc kubenswrapper[4853]: E1209 17:10:54.252691 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerName="pull" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252697 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerName="pull" Dec 09 17:10:54 crc kubenswrapper[4853]: E1209 17:10:54.252707 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerName="extract" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252713 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerName="extract" Dec 09 17:10:54 crc kubenswrapper[4853]: E1209 17:10:54.252728 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerName="pull" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252736 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerName="pull" Dec 09 17:10:54 crc kubenswrapper[4853]: E1209 17:10:54.252747 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerName="extract" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252754 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerName="extract" Dec 09 17:10:54 crc kubenswrapper[4853]: E1209 17:10:54.252764 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerName="util" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252770 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerName="util" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252886 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="512ebe87-626f-4880-b7d5-20d61f740a8b" containerName="extract" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.252903 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04547bf-9d5c-4408-a107-bbe92020eb73" containerName="extract" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.253631 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.255670 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.256872 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.256983 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.257036 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.257062 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.257277 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-d5c8q" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.268764 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l"] Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.375986 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/39cc3872-b69b-4be6-8e95-bfa0fa931045-manager-config\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.376039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-webhook-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.376075 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-apiservice-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.376116 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.376153 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll8gk\" (UniqueName: \"kubernetes.io/projected/39cc3872-b69b-4be6-8e95-bfa0fa931045-kube-api-access-ll8gk\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.477778 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll8gk\" (UniqueName: \"kubernetes.io/projected/39cc3872-b69b-4be6-8e95-bfa0fa931045-kube-api-access-ll8gk\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.478029 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/39cc3872-b69b-4be6-8e95-bfa0fa931045-manager-config\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.478063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-webhook-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.478097 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-apiservice-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.478124 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 
17:10:54.478992 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/39cc3872-b69b-4be6-8e95-bfa0fa931045-manager-config\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.491692 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-webhook-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.492365 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.499197 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll8gk\" (UniqueName: \"kubernetes.io/projected/39cc3872-b69b-4be6-8e95-bfa0fa931045-kube-api-access-ll8gk\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.500925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/39cc3872-b69b-4be6-8e95-bfa0fa931045-apiservice-cert\") pod \"loki-operator-controller-manager-67997bf5ff-6pn4l\" (UID: \"39cc3872-b69b-4be6-8e95-bfa0fa931045\") " pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.572368 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:10:54 crc kubenswrapper[4853]: I1209 17:10:54.989193 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l"] Dec 09 17:10:54 crc kubenswrapper[4853]: W1209 17:10:54.999573 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39cc3872_b69b_4be6_8e95_bfa0fa931045.slice/crio-240ce8cdf01d420e9a52a09b41c5063aac7665ac275d2e4315089f946d8e799a WatchSource:0}: Error finding container 240ce8cdf01d420e9a52a09b41c5063aac7665ac275d2e4315089f946d8e799a: Status 404 returned error can't find the container with id 240ce8cdf01d420e9a52a09b41c5063aac7665ac275d2e4315089f946d8e799a Dec 09 17:10:55 crc kubenswrapper[4853]: I1209 17:10:55.304744 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" event={"ID":"39cc3872-b69b-4be6-8e95-bfa0fa931045","Type":"ContainerStarted","Data":"240ce8cdf01d420e9a52a09b41c5063aac7665ac275d2e4315089f946d8e799a"} Dec 09 17:10:58 crc kubenswrapper[4853]: I1209 17:10:58.779786 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-lbcsr"] Dec 09 17:10:58 crc kubenswrapper[4853]: I1209 17:10:58.782500 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" Dec 09 17:10:58 crc kubenswrapper[4853]: I1209 17:10:58.783925 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-q86n7" Dec 09 17:10:58 crc kubenswrapper[4853]: I1209 17:10:58.786370 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Dec 09 17:10:58 crc kubenswrapper[4853]: I1209 17:10:58.786378 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Dec 09 17:10:58 crc kubenswrapper[4853]: I1209 17:10:58.795560 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-lbcsr"] Dec 09 17:10:58 crc kubenswrapper[4853]: I1209 17:10:58.943744 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfs5b\" (UniqueName: \"kubernetes.io/projected/8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66-kube-api-access-nfs5b\") pod \"cluster-logging-operator-ff9846bd-lbcsr\" (UID: \"8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" Dec 09 17:10:59 crc kubenswrapper[4853]: I1209 17:10:59.044799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfs5b\" (UniqueName: \"kubernetes.io/projected/8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66-kube-api-access-nfs5b\") pod \"cluster-logging-operator-ff9846bd-lbcsr\" (UID: \"8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" Dec 09 17:10:59 crc kubenswrapper[4853]: I1209 17:10:59.065257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfs5b\" (UniqueName: \"kubernetes.io/projected/8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66-kube-api-access-nfs5b\") pod \"cluster-logging-operator-ff9846bd-lbcsr\" (UID: 
\"8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" Dec 09 17:10:59 crc kubenswrapper[4853]: I1209 17:10:59.097921 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" Dec 09 17:11:00 crc kubenswrapper[4853]: I1209 17:11:00.972183 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-lbcsr"] Dec 09 17:11:00 crc kubenswrapper[4853]: W1209 17:11:00.978719 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bd4f09f_79cb_4e4f_b2c0_aeb79f65cb66.slice/crio-a3c7946f3c9c87676d91884249849524305d2330f6756c75598477ac7c9ff17b WatchSource:0}: Error finding container a3c7946f3c9c87676d91884249849524305d2330f6756c75598477ac7c9ff17b: Status 404 returned error can't find the container with id a3c7946f3c9c87676d91884249849524305d2330f6756c75598477ac7c9ff17b Dec 09 17:11:01 crc kubenswrapper[4853]: I1209 17:11:01.356861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" event={"ID":"8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66","Type":"ContainerStarted","Data":"a3c7946f3c9c87676d91884249849524305d2330f6756c75598477ac7c9ff17b"} Dec 09 17:11:01 crc kubenswrapper[4853]: I1209 17:11:01.358977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" event={"ID":"39cc3872-b69b-4be6-8e95-bfa0fa931045","Type":"ContainerStarted","Data":"623954972cb387a959d8fe659000271dc072ae1c182b3a74e4b0bc1abaf702c4"} Dec 09 17:11:09 crc kubenswrapper[4853]: I1209 17:11:09.438705 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" event={"ID":"39cc3872-b69b-4be6-8e95-bfa0fa931045","Type":"ContainerStarted","Data":"eb9dd7f2d27180423943a61fb074132288fd282830df3018a725c5bec6849f9a"} Dec 09 17:11:09 crc kubenswrapper[4853]: I1209 17:11:09.439674 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:11:09 crc kubenswrapper[4853]: I1209 17:11:09.443747 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" Dec 09 17:11:09 crc kubenswrapper[4853]: I1209 17:11:09.482725 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-67997bf5ff-6pn4l" podStartSLOduration=2.098972231 podStartE2EDuration="15.482700868s" podCreationTimestamp="2025-12-09 17:10:54 +0000 UTC" firstStartedPulling="2025-12-09 17:10:55.001623889 +0000 UTC m=+881.936363071" lastFinishedPulling="2025-12-09 17:11:08.385352526 +0000 UTC m=+895.320091708" observedRunningTime="2025-12-09 17:11:09.480171757 +0000 UTC m=+896.414910939" watchObservedRunningTime="2025-12-09 17:11:09.482700868 +0000 UTC m=+896.417440050" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.334232 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q4vt7"] Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.336738 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.345309 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4vt7"] Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.453628 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-catalog-content\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.453684 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvg2d\" (UniqueName: \"kubernetes.io/projected/9381fb79-81a2-477f-b416-aef4fdff3d46-kube-api-access-gvg2d\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.453769 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-utilities\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.555159 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-utilities\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.555281 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-catalog-content\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.555308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvg2d\" (UniqueName: \"kubernetes.io/projected/9381fb79-81a2-477f-b416-aef4fdff3d46-kube-api-access-gvg2d\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.559271 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-utilities\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.559571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-catalog-content\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.583420 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gvg2d\" (UniqueName: \"kubernetes.io/projected/9381fb79-81a2-477f-b416-aef4fdff3d46-kube-api-access-gvg2d\") pod \"redhat-marketplace-q4vt7\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:11 crc kubenswrapper[4853]: I1209 17:11:11.663674 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:12 crc kubenswrapper[4853]: I1209 17:11:12.496523 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4vt7"] Dec 09 17:11:13 crc kubenswrapper[4853]: I1209 17:11:13.463457 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" event={"ID":"8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66","Type":"ContainerStarted","Data":"32ae29766551c5e7f60dea779a03752fb8333888078f70be2833779176b34aed"} Dec 09 17:11:13 crc kubenswrapper[4853]: I1209 17:11:13.466692 4853 generic.go:334] "Generic (PLEG): container finished" podID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerID="7fc2e5c6fd56ef7ca3b382d4124f6785233b982ab293989b4235a9991ded4d79" exitCode=0 Dec 09 17:11:13 crc kubenswrapper[4853]: I1209 17:11:13.466728 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4vt7" event={"ID":"9381fb79-81a2-477f-b416-aef4fdff3d46","Type":"ContainerDied","Data":"7fc2e5c6fd56ef7ca3b382d4124f6785233b982ab293989b4235a9991ded4d79"} Dec 09 17:11:13 crc kubenswrapper[4853]: I1209 17:11:13.466748 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4vt7" event={"ID":"9381fb79-81a2-477f-b416-aef4fdff3d46","Type":"ContainerStarted","Data":"b2a995d91f2506904fb03651858cc09e14f0ef2647165e52ce71fa1489f7de93"} Dec 09 17:11:13 crc kubenswrapper[4853]: I1209 17:11:13.519853 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-lbcsr" podStartSLOduration=4.353256683 podStartE2EDuration="15.519832326s" podCreationTimestamp="2025-12-09 17:10:58 +0000 UTC" firstStartedPulling="2025-12-09 17:11:00.981616983 +0000 UTC m=+887.916356165" lastFinishedPulling="2025-12-09 17:11:12.148192616 +0000 UTC m=+899.082931808" observedRunningTime="2025-12-09 17:11:13.517354606 +0000 UTC m=+900.452093798" watchObservedRunningTime="2025-12-09 17:11:13.519832326 +0000 UTC m=+900.454571508" Dec 09 17:11:14 crc kubenswrapper[4853]: I1209 17:11:14.498923 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4vt7" event={"ID":"9381fb79-81a2-477f-b416-aef4fdff3d46","Type":"ContainerStarted","Data":"19d8074fff8afdfbb3297e2b8a462b50ad24006638151d5cd2778da6f7b8eddd"} Dec 09 17:11:15 crc kubenswrapper[4853]: I1209 17:11:15.507809 4853 generic.go:334] "Generic (PLEG): container finished" podID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerID="19d8074fff8afdfbb3297e2b8a462b50ad24006638151d5cd2778da6f7b8eddd" exitCode=0 Dec 09 17:11:15 crc kubenswrapper[4853]: I1209 17:11:15.507857 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4vt7" event={"ID":"9381fb79-81a2-477f-b416-aef4fdff3d46","Type":"ContainerDied","Data":"19d8074fff8afdfbb3297e2b8a462b50ad24006638151d5cd2778da6f7b8eddd"} Dec 09 17:11:18 crc kubenswrapper[4853]: I1209 17:11:18.528883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-q4vt7" event={"ID":"9381fb79-81a2-477f-b416-aef4fdff3d46","Type":"ContainerStarted","Data":"8653791efe6a093d30316a72bf836282ceca095f97209452ec60d3a9efaaa480"} Dec 09 17:11:18 crc kubenswrapper[4853]: I1209 17:11:18.552395 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q4vt7" podStartSLOduration=3.5976443700000003 podStartE2EDuration="7.552374028s" podCreationTimestamp="2025-12-09 17:11:11 +0000 UTC" firstStartedPulling="2025-12-09 17:11:13.468315011 +0000 UTC m=+900.403054193" lastFinishedPulling="2025-12-09 17:11:17.423044669 +0000 UTC m=+904.357783851" observedRunningTime="2025-12-09 17:11:18.543559371 +0000 UTC m=+905.478298563" watchObservedRunningTime="2025-12-09 17:11:18.552374028 +0000 UTC m=+905.487113210" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.034938 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.036107 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.039169 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.039372 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.043122 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.067433 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\") pod \"minio\" (UID: \"d076d1f4-7f6e-4dce-b1fd-26e040d90f58\") " pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.067495 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg48p\" (UniqueName: \"kubernetes.io/projected/d076d1f4-7f6e-4dce-b1fd-26e040d90f58-kube-api-access-sg48p\") pod \"minio\" (UID: \"d076d1f4-7f6e-4dce-b1fd-26e040d90f58\") " pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.169288 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\") pod \"minio\" (UID: \"d076d1f4-7f6e-4dce-b1fd-26e040d90f58\") " pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.169345 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg48p\" (UniqueName: \"kubernetes.io/projected/d076d1f4-7f6e-4dce-b1fd-26e040d90f58-kube-api-access-sg48p\") pod \"minio\" (UID: \"d076d1f4-7f6e-4dce-b1fd-26e040d90f58\") " pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.175174 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.175248 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\") pod \"minio\" (UID: \"d076d1f4-7f6e-4dce-b1fd-26e040d90f58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/89e3692f2630a54770273274e10ddf4b84b5bc281745da1dce636067ff35deb7/globalmount\"" pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.196647 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg48p\" (UniqueName: \"kubernetes.io/projected/d076d1f4-7f6e-4dce-b1fd-26e040d90f58-kube-api-access-sg48p\") pod \"minio\" (UID: \"d076d1f4-7f6e-4dce-b1fd-26e040d90f58\") " pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.210230 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-488922b4-30a0-4af4-a5cf-af25cadbab95\") pod \"minio\" (UID: \"d076d1f4-7f6e-4dce-b1fd-26e040d90f58\") " pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.360707 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Dec 09 17:11:19 crc kubenswrapper[4853]: I1209 17:11:19.758963 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Dec 09 17:11:20 crc kubenswrapper[4853]: I1209 17:11:20.543410 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"d076d1f4-7f6e-4dce-b1fd-26e040d90f58","Type":"ContainerStarted","Data":"56add890a508b2b37e88e573001ce92c6a63e3d43a05d1330778a12ca4416f95"} Dec 09 17:11:21 crc kubenswrapper[4853]: I1209 17:11:21.663892 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:21 crc kubenswrapper[4853]: I1209 17:11:21.664012 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:21 crc kubenswrapper[4853]: I1209 17:11:21.737612 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:22 crc kubenswrapper[4853]: I1209 17:11:22.629909 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:23 crc kubenswrapper[4853]: I1209 17:11:23.577639 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"d076d1f4-7f6e-4dce-b1fd-26e040d90f58","Type":"ContainerStarted","Data":"842ba079d03bcb2235d3fee71a32d6eba6faa8f989eb92d84311e3930cdf31b9"} Dec 09 17:11:23 crc kubenswrapper[4853]: I1209 17:11:23.604574 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.319984013 podStartE2EDuration="7.60455802s" podCreationTimestamp="2025-12-09 17:11:16 +0000 UTC" firstStartedPulling="2025-12-09 17:11:19.765480914 +0000 UTC m=+906.700220116" lastFinishedPulling="2025-12-09 17:11:23.050054951 +0000 UTC m=+909.984794123" observedRunningTime="2025-12-09 17:11:23.602942056 +0000 UTC m=+910.537681238" watchObservedRunningTime="2025-12-09 17:11:23.60455802 +0000 UTC m=+910.539297202" Dec 09 17:11:24 crc 
kubenswrapper[4853]: I1209 17:11:24.112195 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4vt7"] Dec 09 17:11:25 crc kubenswrapper[4853]: I1209 17:11:25.587202 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q4vt7" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerName="registry-server" containerID="cri-o://8653791efe6a093d30316a72bf836282ceca095f97209452ec60d3a9efaaa480" gracePeriod=2 Dec 09 17:11:26 crc kubenswrapper[4853]: I1209 17:11:26.616504 4853 generic.go:334] "Generic (PLEG): container finished" podID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerID="8653791efe6a093d30316a72bf836282ceca095f97209452ec60d3a9efaaa480" exitCode=0 Dec 09 17:11:26 crc kubenswrapper[4853]: I1209 17:11:26.616561 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4vt7" event={"ID":"9381fb79-81a2-477f-b416-aef4fdff3d46","Type":"ContainerDied","Data":"8653791efe6a093d30316a72bf836282ceca095f97209452ec60d3a9efaaa480"} Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.314111 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-p22sf"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.315208 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.318725 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.319958 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-5vgvx" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.320000 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.320175 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.320305 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.330626 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-p22sf"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.432503 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.432570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 
17:11:27.432625 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmcc\" (UniqueName: \"kubernetes.io/projected/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-kube-api-access-2gmcc\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.432686 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.432726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-config\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.488706 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-td5fh"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.489722 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.491303 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.491534 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.493084 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.497948 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-td5fh"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.534672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.534729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.534764 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gmcc\" (UniqueName: \"kubernetes.io/projected/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-kube-api-access-2gmcc\") pod 
\"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.534813 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.534847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-config\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.535719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-config\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.535836 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.544580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.548230 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.556387 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.556476 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gmcc\" (UniqueName: \"kubernetes.io/projected/95c39f8d-6f5d-4c8e-8505-9cff1c6da497-kube-api-access-2gmcc\") pod \"logging-loki-distributor-76cc67bf56-p22sf\" (UID: \"95c39f8d-6f5d-4c8e-8505-9cff1c6da497\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.565691 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.570607 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.570933 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.593913 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.633901 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.634304 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.634974 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.639906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.639942 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a31444-3d60-49f9-b39e-dd8b79cc4195-config\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.639982 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.640018 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvshd\" (UniqueName: \"kubernetes.io/projected/b8a31444-3d60-49f9-b39e-dd8b79cc4195-kube-api-access-jvshd\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.640052 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.640088 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.648895 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.649115 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.649223 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.649322 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.649431 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.649524 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-8rfqv" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.660727 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.667391 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"] Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.668328 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.680108 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"]
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.740975 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvshd\" (UniqueName: \"kubernetes.io/projected/b8a31444-3d60-49f9-b39e-dd8b79cc4195-kube-api-access-jvshd\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741281 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741306 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-lokistack-gateway\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741354 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741378 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741408 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-tenants\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4ks\" (UniqueName: \"kubernetes.io/projected/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-kube-api-access-wb4ks\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741481 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741497 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-config\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741525 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-rbac\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741613 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrlp4\" (UniqueName: \"kubernetes.io/projected/eb9aef1a-068b-494d-ba15-f49b97fed99c-kube-api-access-wrlp4\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741632 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741650 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a31444-3d60-49f9-b39e-dd8b79cc4195-config\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741689 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741708 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.741734 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.743283 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8a31444-3d60-49f9-b39e-dd8b79cc4195-config\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.743582 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.744218 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.744305 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-tls-secret\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.746148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.746209 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.746827 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b8a31444-3d60-49f9-b39e-dd8b79cc4195-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.765409 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvshd\" (UniqueName: \"kubernetes.io/projected/b8a31444-3d60-49f9-b39e-dd8b79cc4195-kube-api-access-jvshd\") pod \"logging-loki-querier-5895d59bb8-td5fh\" (UID: \"b8a31444-3d60-49f9-b39e-dd8b79cc4195\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.816306 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849361 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-tls-secret\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849408 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-rbac\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849446 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849467 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-lokistack-gateway\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849522 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849542 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xhdw\" (UniqueName: \"kubernetes.io/projected/fd90b911-3db7-49be-8c84-42d05d55e4d3-kube-api-access-2xhdw\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849562 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849584 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-tenants\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849616 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4ks\" (UniqueName: \"kubernetes.io/projected/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-kube-api-access-wb4ks\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849655 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-tls-secret\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849687 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-config\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"
Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849707 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-rbac\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"
\"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrlp4\" (UniqueName: \"kubernetes.io/projected/eb9aef1a-068b-494d-ba15-f49b97fed99c-kube-api-access-wrlp4\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849742 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-lokistack-gateway\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849767 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-tenants\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849792 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849807 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.849823 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.850838 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.851339 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: 
\"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.851996 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-config\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.852795 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-rbac\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.853686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.854272 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-tls-secret\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.854887 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/eb9aef1a-068b-494d-ba15-f49b97fed99c-lokistack-gateway\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.861072 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.861574 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.871058 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4ks\" (UniqueName: \"kubernetes.io/projected/4990ddc1-fd57-44bd-a4e9-a3b63f5f3920-kube-api-access-wb4ks\") pod \"logging-loki-query-frontend-84558f7c9f-mv68d\" (UID: \"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.872489 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrlp4\" (UniqueName: \"kubernetes.io/projected/eb9aef1a-068b-494d-ba15-f49b97fed99c-kube-api-access-wrlp4\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.914470 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.951426 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-rbac\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.951489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.951547 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.951569 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xhdw\" (UniqueName: \"kubernetes.io/projected/fd90b911-3db7-49be-8c84-42d05d55e4d3-kube-api-access-2xhdw\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.951619 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.951642 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-tls-secret\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.951668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-lokistack-gateway\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 
crc kubenswrapper[4853]: I1209 17:11:27.951695 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-tenants\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.952463 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-rbac\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.953227 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.953309 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.954162 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fd90b911-3db7-49be-8c84-42d05d55e4d3-lokistack-gateway\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.955620 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.956673 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-tenants\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.957475 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fd90b911-3db7-49be-8c84-42d05d55e4d3-tls-secret\") pod \"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:27 crc kubenswrapper[4853]: I1209 17:11:27.970658 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xhdw\" (UniqueName: \"kubernetes.io/projected/fd90b911-3db7-49be-8c84-42d05d55e4d3-kube-api-access-2xhdw\") pod 
\"logging-loki-gateway-5f5f8575b6-qw6dq\" (UID: \"fd90b911-3db7-49be-8c84-42d05d55e4d3\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.085914 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.126169 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-p22sf"] Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.195559 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-tenants\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.196642 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/eb9aef1a-068b-494d-ba15-f49b97fed99c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f5f8575b6-7sclr\" (UID: \"eb9aef1a-068b-494d-ba15-f49b97fed99c\") " pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.288793 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-td5fh"] Dec 09 17:11:28 crc kubenswrapper[4853]: W1209 17:11:28.293488 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a31444_3d60_49f9_b39e_dd8b79cc4195.slice/crio-3f47ee31473dcf4c3ab7ca00d799f32206e4237fbe8afaca6e9ab7bde649b0f0 WatchSource:0}: Error finding container 3f47ee31473dcf4c3ab7ca00d799f32206e4237fbe8afaca6e9ab7bde649b0f0: Status 404 returned error can't find the container with id 3f47ee31473dcf4c3ab7ca00d799f32206e4237fbe8afaca6e9ab7bde649b0f0 Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.348574 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.478940 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.479780 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.489302 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.489668 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.500485 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.534101 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k2kdj"] Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.535775 4853 util.go:30] "No sandbox for pod can be found. 
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.553362 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2kdj"]
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.584777 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.586570 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.587816 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.603166 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.609763 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.625860 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.626924 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.631745 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.631983 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.634801 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.655837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" event={"ID":"b8a31444-3d60-49f9-b39e-dd8b79cc4195","Type":"ContainerStarted","Data":"3f47ee31473dcf4c3ab7ca00d799f32206e4237fbe8afaca6e9ab7bde649b0f0"}
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.661830 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" event={"ID":"95c39f8d-6f5d-4c8e-8505-9cff1c6da497","Type":"ContainerStarted","Data":"f0896a9133a93eb78b1bb2e491f7c553c80b28f7098fc85e1942aa71f75549a9"}
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664282 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-catalog-content\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664329 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bwr\" (UniqueName: \"kubernetes.io/projected/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-kube-api-access-v9bwr\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664358 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664388 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664429 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqndz\" (UniqueName: \"kubernetes.io/projected/e056741b-93d1-44c3-a16d-b6a04a3e1979-kube-api-access-cqndz\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664492 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e056741b-93d1-44c3-a16d-b6a04a3e1979-config\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-utilities\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664579 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.664642 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.685589 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq"]
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.735339 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d"]
Dec 09 17:11:28 crc kubenswrapper[4853]: W1209 17:11:28.737335 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4990ddc1_fd57_44bd_a4e9_a3b63f5f3920.slice/crio-e40c2b0ef1f79e0fcb35b4b51edde4d5e28fb965b5b1f3391200178f9a2e95ca WatchSource:0}: Error finding container e40c2b0ef1f79e0fcb35b4b51edde4d5e28fb965b5b1f3391200178f9a2e95ca: Status 404 returned error can't find the container with id e40c2b0ef1f79e0fcb35b4b51edde4d5e28fb965b5b1f3391200178f9a2e95ca
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768038 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768104 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768151 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768180 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768232 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768252 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768314 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9bwr\" (UniqueName: \"kubernetes.io/projected/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-kube-api-access-v9bwr\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768332 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768368 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768460 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqndz\" (UniqueName: \"kubernetes.io/projected/e056741b-93d1-44c3-a16d-b6a04a3e1979-kube-api-access-cqndz\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768495 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768541 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e056741b-93d1-44c3-a16d-b6a04a3e1979-config\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-utilities\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768614 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768652 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768692 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzz2\" (UniqueName: \"kubernetes.io/projected/406b84c5-9e7a-4caa-8d3d-561834086d10-kube-api-access-whzz2\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768787 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-catalog-content\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768823 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768844 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0"
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768871 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0"
\"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768908 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd4lh\" (UniqueName: \"kubernetes.io/projected/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-kube-api-access-pd4lh\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.768950 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/406b84c5-9e7a-4caa-8d3d-561834086d10-config\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.769393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.769622 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-catalog-content\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.769723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-utilities\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.773811 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.777033 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e056741b-93d1-44c3-a16d-b6a04a3e1979-config\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.787219 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9bwr\" (UniqueName: \"kubernetes.io/projected/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-kube-api-access-v9bwr\") pod \"community-operators-k2kdj\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.789949 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqndz\" (UniqueName: \"kubernetes.io/projected/e056741b-93d1-44c3-a16d-b6a04a3e1979-kube-api-access-cqndz\") pod 
\"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.790540 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.793155 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e056741b-93d1-44c3-a16d-b6a04a3e1979-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.793435 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.793484 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/765106ca6eb99b9fb7d51819a4aaa396c65efbf6265c6801013916e120dca065/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.793579 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.793640 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/abd4d7e1e0f9325c06893ca9f4bff0cee53bf33455782c490c09c40cc6747285/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.833262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-749968ca-4298-436a-8ed6-9a7b61084c4c\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.866207 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d836e0-91af-4d6c-b4ad-1dd968ce4b60\") pod \"logging-loki-ingester-0\" (UID: \"e056741b-93d1-44c3-a16d-b6a04a3e1979\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870552 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870609 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870641 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870671 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whzz2\" (UniqueName: \"kubernetes.io/projected/406b84c5-9e7a-4caa-8d3d-561834086d10-kube-api-access-whzz2\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870714 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870736 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870756 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd4lh\" (UniqueName: \"kubernetes.io/projected/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-kube-api-access-pd4lh\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870778 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/406b84c5-9e7a-4caa-8d3d-561834086d10-config\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870823 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870848 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870868 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870884 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.870907 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: 
\"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.874375 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.874694 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/406b84c5-9e7a-4caa-8d3d-561834086d10-config\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.874793 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.874852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.875121 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.877117 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d4c381bae2c0aedc13ca8c10eab17e730465bce32c2cff4a4f98633d2faf0af4/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.875477 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.876233 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.886518 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/406b84c5-9e7a-4caa-8d3d-561834086d10-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.888286 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
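
Note that the first two kubelet entries immediately above are stamped out of order: the MountVolume.MountDevice message at 17:11:28.877117 reaches the journal before the MountVolume.SetUp message stamped 17:11:28.875477. klog takes the header timestamp when the message is formatted, while journal position reflects write order across concurrent kubelet goroutines, which is why small inversions like this can appear. When reconstructing a mount timeline from a capture like this one, it is safer to sort by the embedded klog timestamp than by journal position; a small self-contained sketch (the regex assumes the standard "Lmmdd hh:mm:ss.uuuuuu" klog header, and the sample strings are abbreviated copies of the two entries above):

    package main

    import (
        "fmt"
        "regexp"
        "sort"
    )

    // klogHeader captures the date and time fields of a klog header,
    // e.g. "I1209 17:11:28.877117".
    var klogHeader = regexp.MustCompile(`[IWEF](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})`)

    // sortByKlogTime reorders log lines by their embedded klog timestamp.
    // Within one calendar year, "mmdd" plus "hh:mm:ss.uuuuuu" compares
    // correctly as a plain string, so no time parsing is needed. Lines
    // without a recognizable header keep their relative order (stable sort).
    func sortByKlogTime(lines []string) {
        key := func(s string) string {
            if m := klogHeader.FindStringSubmatch(s); m != nil {
                return m[1] + m[2]
            }
            return ""
        }
        sort.SliceStable(lines, func(i, j int) bool { return key(lines[i]) < key(lines[j]) })
    }

    func main() {
        lines := []string{
            `I1209 17:11:28.877117 4853 operation_generator.go:580] MountVolume.MountDevice succeeded ...`,
            `I1209 17:11:28.875477 4853 operation_generator.go:637] MountVolume.SetUp succeeded ...`,
        }
        sortByKlogTime(lines)
        for _, l := range lines {
            fmt.Println(l) // prints the 875477 entry first
        }
    }

The same caveat applies to later stretches of this capture (for example the interleaved PLEG and probe entries), so apparent ordering anomalies within a few milliseconds should not by themselves be read as kubelet misbehavior.
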
Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.888334 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a9fa87b6c53488c93f1a08542e1aa1c3360fa3be44b380affc97ebd36e6c1947/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.888436 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.896265 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr"] Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.898933 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzz2\" (UniqueName: \"kubernetes.io/projected/406b84c5-9e7a-4caa-8d3d-561834086d10-kube-api-access-whzz2\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:28 crc kubenswrapper[4853]: W1209 17:11:28.899286 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb9aef1a_068b_494d_ba15_f49b97fed99c.slice/crio-f2e7a9950fd219b9ddde8c77c8a83f844d4e3dbcd9f3635ab690acf44a173aac WatchSource:0}: Error finding container f2e7a9950fd219b9ddde8c77c8a83f844d4e3dbcd9f3635ab690acf44a173aac: Status 404 returned error can't find the container with id f2e7a9950fd219b9ddde8c77c8a83f844d4e3dbcd9f3635ab690acf44a173aac Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.938042 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:28 crc kubenswrapper[4853]: I1209 17:11:28.954254 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56c6bbba-7699-40bf-a5ed-931fc36ff3e1\") pod \"logging-loki-compactor-0\" (UID: \"406b84c5-9e7a-4caa-8d3d-561834086d10\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.098962 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.249207 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.285240 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-config\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.288234 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.308325 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd4lh\" (UniqueName: \"kubernetes.io/projected/86ebc563-5cec-468c-97fd-b1a7b5e1f4a2-kube-api-access-pd4lh\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.476398 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f8fc95fa-db46-4a17-86ab-f6a8067baaf1\") pod \"logging-loki-index-gateway-0\" (UID: \"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.483271 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2kdj"] Dec 09 17:11:29 crc kubenswrapper[4853]: W1209 17:11:29.484784 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9a4cb2_8cc7_40e7_ab33_4050bab331c4.slice/crio-8d8ecb0212152461dd8a9ccb3b52f3ae5f601805c3e1e11657b6f38030cefda3 WatchSource:0}: Error finding container 8d8ecb0212152461dd8a9ccb3b52f3ae5f601805c3e1e11657b6f38030cefda3: Status 404 returned error can't find the container with id 8d8ecb0212152461dd8a9ccb3b52f3ae5f601805c3e1e11657b6f38030cefda3 Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.492141 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.564789 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.632518 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.687981 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" event={"ID":"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920","Type":"ContainerStarted","Data":"e40c2b0ef1f79e0fcb35b4b51edde4d5e28fb965b5b1f3391200178f9a2e95ca"} Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.704230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" event={"ID":"eb9aef1a-068b-494d-ba15-f49b97fed99c","Type":"ContainerStarted","Data":"f2e7a9950fd219b9ddde8c77c8a83f844d4e3dbcd9f3635ab690acf44a173aac"} Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.706327 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"406b84c5-9e7a-4caa-8d3d-561834086d10","Type":"ContainerStarted","Data":"cc7717534d65accce9a38196ec3da50c85e79df2e7f3a9065eba51f71f6a6551"} Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.709336 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" event={"ID":"fd90b911-3db7-49be-8c84-42d05d55e4d3","Type":"ContainerStarted","Data":"075cc24990a60b6612a2880d077934da1293cd9f3e69c608ad380cbab70fe031"} Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.711323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2kdj" event={"ID":"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4","Type":"ContainerStarted","Data":"8d8ecb0212152461dd8a9ccb3b52f3ae5f601805c3e1e11657b6f38030cefda3"} Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.712447 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e056741b-93d1-44c3-a16d-b6a04a3e1979","Type":"ContainerStarted","Data":"1443b1af550b34883c497632d57a4837828d3c7e0695cf84ba597b870c72a9a2"} Dec 09 17:11:29 crc kubenswrapper[4853]: I1209 17:11:29.898834 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.002514 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvg2d\" (UniqueName: \"kubernetes.io/projected/9381fb79-81a2-477f-b416-aef4fdff3d46-kube-api-access-gvg2d\") pod \"9381fb79-81a2-477f-b416-aef4fdff3d46\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.002580 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-utilities\") pod \"9381fb79-81a2-477f-b416-aef4fdff3d46\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.002674 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-catalog-content\") pod \"9381fb79-81a2-477f-b416-aef4fdff3d46\" (UID: \"9381fb79-81a2-477f-b416-aef4fdff3d46\") " Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.004650 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-utilities" (OuterVolumeSpecName: "utilities") pod "9381fb79-81a2-477f-b416-aef4fdff3d46" (UID: "9381fb79-81a2-477f-b416-aef4fdff3d46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.008746 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9381fb79-81a2-477f-b416-aef4fdff3d46-kube-api-access-gvg2d" (OuterVolumeSpecName: "kube-api-access-gvg2d") pod "9381fb79-81a2-477f-b416-aef4fdff3d46" (UID: "9381fb79-81a2-477f-b416-aef4fdff3d46"). InnerVolumeSpecName "kube-api-access-gvg2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.024810 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9381fb79-81a2-477f-b416-aef4fdff3d46" (UID: "9381fb79-81a2-477f-b416-aef4fdff3d46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.076077 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.104878 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.104919 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvg2d\" (UniqueName: \"kubernetes.io/projected/9381fb79-81a2-477f-b416-aef4fdff3d46-kube-api-access-gvg2d\") on node \"crc\" DevicePath \"\"" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.104934 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9381fb79-81a2-477f-b416-aef4fdff3d46-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.731936 4853 generic.go:334] "Generic (PLEG): container finished" podID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerID="cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545" exitCode=0 Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.732761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2kdj" event={"ID":"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4","Type":"ContainerDied","Data":"cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545"} Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.737664 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4vt7" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.737953 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4vt7" event={"ID":"9381fb79-81a2-477f-b416-aef4fdff3d46","Type":"ContainerDied","Data":"b2a995d91f2506904fb03651858cc09e14f0ef2647165e52ce71fa1489f7de93"} Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.737994 4853 scope.go:117] "RemoveContainer" containerID="8653791efe6a093d30316a72bf836282ceca095f97209452ec60d3a9efaaa480" Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.757211 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2","Type":"ContainerStarted","Data":"d8ee1e122bbf9f3b0c96d514d3d4020e659aae713e9ee86725b2c8d824add228"} Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.773385 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4vt7"] Dec 09 17:11:30 crc kubenswrapper[4853]: I1209 17:11:30.779968 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4vt7"] Dec 09 17:11:31 crc kubenswrapper[4853]: I1209 17:11:31.423096 4853 scope.go:117] "RemoveContainer" containerID="19d8074fff8afdfbb3297e2b8a462b50ad24006638151d5cd2778da6f7b8eddd" Dec 09 17:11:31 crc kubenswrapper[4853]: I1209 17:11:31.577385 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" path="/var/lib/kubelet/pods/9381fb79-81a2-477f-b416-aef4fdff3d46/volumes" Dec 09 17:11:32 crc kubenswrapper[4853]: I1209 17:11:32.715885 4853 scope.go:117] "RemoveContainer" 
containerID="7fc2e5c6fd56ef7ca3b382d4124f6785233b982ab293989b4235a9991ded4d79" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.784487 4853 generic.go:334] "Generic (PLEG): container finished" podID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerID="1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594" exitCode=0 Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.785683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2kdj" event={"ID":"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4","Type":"ContainerDied","Data":"1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.788718 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" event={"ID":"4990ddc1-fd57-44bd-a4e9-a3b63f5f3920","Type":"ContainerStarted","Data":"599dd37e839d54dd3639cd2008745edf504d59b1bff61a525302d8071d259a59"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.788763 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.790316 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" event={"ID":"95c39f8d-6f5d-4c8e-8505-9cff1c6da497","Type":"ContainerStarted","Data":"84419ccde1751e493818c54a4bca34dca1b80bb5ad533a6df56b480716e56cc7"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.790406 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.793062 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e056741b-93d1-44c3-a16d-b6a04a3e1979","Type":"ContainerStarted","Data":"5357511c3d042f80c83b16683c6881b9a7b966dbfa511643d4389a592c88f119"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.793171 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.795908 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" event={"ID":"eb9aef1a-068b-494d-ba15-f49b97fed99c","Type":"ContainerStarted","Data":"e9356843690ed46dea6ab2249f86f195506ed0ba04e2f2c6d0bc029bae0b449a"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.797507 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" event={"ID":"b8a31444-3d60-49f9-b39e-dd8b79cc4195","Type":"ContainerStarted","Data":"da43414ce89e04293777bee62caeee90c66a49a25179010fbe07545be393d4ed"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.798133 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.800082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"406b84c5-9e7a-4caa-8d3d-561834086d10","Type":"ContainerStarted","Data":"e45062956b00503d3a7e0b8bbf42339f95a641e74d2d7d388fac8692c4677cd6"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.800292 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.801572 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" event={"ID":"fd90b911-3db7-49be-8c84-42d05d55e4d3","Type":"ContainerStarted","Data":"b60be21fb20d217e4023ec18f103fadf9a0ff17bf5850a38cba3966a69d7d610"} Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.828507 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.061373037 podStartE2EDuration="6.828488078s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:29.540106069 +0000 UTC m=+916.474845251" lastFinishedPulling="2025-12-09 17:11:33.30722111 +0000 UTC m=+920.241960292" observedRunningTime="2025-12-09 17:11:33.824148616 +0000 UTC m=+920.758887808" watchObservedRunningTime="2025-12-09 17:11:33.828488078 +0000 UTC m=+920.763227260" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.848832 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" podStartSLOduration=2.747487231 podStartE2EDuration="6.848808447s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:28.740023628 +0000 UTC m=+915.674762810" lastFinishedPulling="2025-12-09 17:11:32.841344844 +0000 UTC m=+919.776084026" observedRunningTime="2025-12-09 17:11:33.847320456 +0000 UTC m=+920.782059668" watchObservedRunningTime="2025-12-09 17:11:33.848808447 +0000 UTC m=+920.783547629" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.878424 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" podStartSLOduration=2.177950881 podStartE2EDuration="6.878401597s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:28.296710784 +0000 UTC m=+915.231449976" lastFinishedPulling="2025-12-09 17:11:32.99716151 +0000 UTC m=+919.931900692" observedRunningTime="2025-12-09 17:11:33.875562937 +0000 UTC m=+920.810302149" watchObservedRunningTime="2025-12-09 17:11:33.878401597 +0000 UTC m=+920.813140809" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.896275 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" podStartSLOduration=2.103954815 podStartE2EDuration="6.896253876s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:28.20842099 +0000 UTC m=+915.143160172" lastFinishedPulling="2025-12-09 17:11:33.000720041 +0000 UTC m=+919.935459233" observedRunningTime="2025-12-09 17:11:33.89102934 +0000 UTC m=+920.825768522" watchObservedRunningTime="2025-12-09 17:11:33.896253876 +0000 UTC m=+920.830993058" Dec 09 17:11:33 crc kubenswrapper[4853]: I1209 17:11:33.912310 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.565523066 podStartE2EDuration="6.912288066s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:29.653383394 +0000 UTC m=+916.588122576" lastFinishedPulling="2025-12-09 17:11:33.000148384 +0000 UTC m=+919.934887576" observedRunningTime="2025-12-09 17:11:33.906651938 +0000 UTC m=+920.841391130" watchObservedRunningTime="2025-12-09 17:11:33.912288066 +0000 UTC 
m=+920.847027258" Dec 09 17:11:34 crc kubenswrapper[4853]: I1209 17:11:34.819301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"86ebc563-5cec-468c-97fd-b1a7b5e1f4a2","Type":"ContainerStarted","Data":"735199e0e8b37e4ac8c23d6b60d9c25342f500542c4ca055b074457e5513a3b1"} Dec 09 17:11:34 crc kubenswrapper[4853]: I1209 17:11:34.819677 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:34 crc kubenswrapper[4853]: I1209 17:11:34.823743 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2kdj" event={"ID":"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4","Type":"ContainerStarted","Data":"127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb"} Dec 09 17:11:34 crc kubenswrapper[4853]: I1209 17:11:34.873492 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.975076583 podStartE2EDuration="7.873475132s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:30.086590154 +0000 UTC m=+917.021329336" lastFinishedPulling="2025-12-09 17:11:33.984988703 +0000 UTC m=+920.919727885" observedRunningTime="2025-12-09 17:11:34.843640146 +0000 UTC m=+921.778379338" watchObservedRunningTime="2025-12-09 17:11:34.873475132 +0000 UTC m=+921.808214314" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.845586 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" event={"ID":"fd90b911-3db7-49be-8c84-42d05d55e4d3","Type":"ContainerStarted","Data":"63f7890d2e110302882646446ec262d51c389c23c1e0569610f248510292d5e3"} Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.846014 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.846178 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.849099 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" event={"ID":"eb9aef1a-068b-494d-ba15-f49b97fed99c","Type":"ContainerStarted","Data":"8fd96e20695f5d29e42663ea763bf43e58f0d1304ba40dfcaaa64e33702476ee"} Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.849552 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.849814 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.860511 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.860725 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.864939 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" Dec 09 17:11:36 crc 
kubenswrapper[4853]: I1209 17:11:36.865027 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.874012 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k2kdj" podStartSLOduration=5.235777827 podStartE2EDuration="8.873991015s" podCreationTimestamp="2025-12-09 17:11:28 +0000 UTC" firstStartedPulling="2025-12-09 17:11:30.734287405 +0000 UTC m=+917.669026587" lastFinishedPulling="2025-12-09 17:11:34.372500593 +0000 UTC m=+921.307239775" observedRunningTime="2025-12-09 17:11:34.875448928 +0000 UTC m=+921.810188130" watchObservedRunningTime="2025-12-09 17:11:36.873991015 +0000 UTC m=+923.808730197" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.876665 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-qw6dq" podStartSLOduration=2.633471485 podStartE2EDuration="9.876647119s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:28.688923375 +0000 UTC m=+915.623662557" lastFinishedPulling="2025-12-09 17:11:35.932099009 +0000 UTC m=+922.866838191" observedRunningTime="2025-12-09 17:11:36.875995541 +0000 UTC m=+923.810734723" watchObservedRunningTime="2025-12-09 17:11:36.876647119 +0000 UTC m=+923.811386301" Dec 09 17:11:36 crc kubenswrapper[4853]: I1209 17:11:36.914226 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f5f8575b6-7sclr" podStartSLOduration=2.897439953 podStartE2EDuration="9.914200462s" podCreationTimestamp="2025-12-09 17:11:27 +0000 UTC" firstStartedPulling="2025-12-09 17:11:28.906777061 +0000 UTC m=+915.841516243" lastFinishedPulling="2025-12-09 17:11:35.92353756 +0000 UTC m=+922.858276752" observedRunningTime="2025-12-09 17:11:36.912950157 +0000 UTC m=+923.847689349" watchObservedRunningTime="2025-12-09 17:11:36.914200462 +0000 UTC m=+923.848939644" Dec 09 17:11:38 crc kubenswrapper[4853]: I1209 17:11:38.939620 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:38 crc kubenswrapper[4853]: I1209 17:11:38.939964 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:39 crc kubenswrapper[4853]: I1209 17:11:39.004813 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:39 crc kubenswrapper[4853]: I1209 17:11:39.907751 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:41 crc kubenswrapper[4853]: I1209 17:11:41.516855 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2kdj"] Dec 09 17:11:41 crc kubenswrapper[4853]: I1209 17:11:41.881580 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k2kdj" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="registry-server" containerID="cri-o://127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb" gracePeriod=2 Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.319622 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.453880 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9bwr\" (UniqueName: \"kubernetes.io/projected/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-kube-api-access-v9bwr\") pod \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.453935 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-utilities\") pod \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.454040 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-catalog-content\") pod \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\" (UID: \"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4\") " Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.454835 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-utilities" (OuterVolumeSpecName: "utilities") pod "ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" (UID: "ea9a4cb2-8cc7-40e7-ab33-4050bab331c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.462139 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-kube-api-access-v9bwr" (OuterVolumeSpecName: "kube-api-access-v9bwr") pod "ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" (UID: "ea9a4cb2-8cc7-40e7-ab33-4050bab331c4"). InnerVolumeSpecName "kube-api-access-v9bwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.520054 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" (UID: "ea9a4cb2-8cc7-40e7-ab33-4050bab331c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.557613 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9bwr\" (UniqueName: \"kubernetes.io/projected/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-kube-api-access-v9bwr\") on node \"crc\" DevicePath \"\"" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.557661 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.557678 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.890863 4853 generic.go:334] "Generic (PLEG): container finished" podID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerID="127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb" exitCode=0 Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.890968 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k2kdj" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.890964 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2kdj" event={"ID":"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4","Type":"ContainerDied","Data":"127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb"} Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.891383 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2kdj" event={"ID":"ea9a4cb2-8cc7-40e7-ab33-4050bab331c4","Type":"ContainerDied","Data":"8d8ecb0212152461dd8a9ccb3b52f3ae5f601805c3e1e11657b6f38030cefda3"} Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.891405 4853 scope.go:117] "RemoveContainer" containerID="127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.920449 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k2kdj"] Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.920724 4853 scope.go:117] "RemoveContainer" containerID="1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594" Dec 09 17:11:42 crc kubenswrapper[4853]: I1209 17:11:42.927294 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k2kdj"] Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.496552 4853 scope.go:117] "RemoveContainer" containerID="cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545" Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.534537 4853 scope.go:117] "RemoveContainer" containerID="127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb" Dec 09 17:11:43 crc kubenswrapper[4853]: E1209 17:11:43.536008 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb\": container with ID starting with 127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb not found: ID does not exist" containerID="127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb" Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.536068 
4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb"} err="failed to get container status \"127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb\": rpc error: code = NotFound desc = could not find container \"127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb\": container with ID starting with 127c46ba6b73c31206ffcb1a64b33b42371e01d1b90301304f330adc0d4e7fcb not found: ID does not exist" Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.536106 4853 scope.go:117] "RemoveContainer" containerID="1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594" Dec 09 17:11:43 crc kubenswrapper[4853]: E1209 17:11:43.536500 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594\": container with ID starting with 1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594 not found: ID does not exist" containerID="1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594" Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.536527 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594"} err="failed to get container status \"1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594\": rpc error: code = NotFound desc = could not find container \"1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594\": container with ID starting with 1ad742d8ceada1c4456724810ef766747e18463591375daf5062eff9a5863594 not found: ID does not exist" Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.536542 4853 scope.go:117] "RemoveContainer" containerID="cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545" Dec 09 17:11:43 crc kubenswrapper[4853]: E1209 17:11:43.536764 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545\": container with ID starting with cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545 not found: ID does not exist" containerID="cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545" Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.536786 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545"} err="failed to get container status \"cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545\": rpc error: code = NotFound desc = could not find container \"cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545\": container with ID starting with cc0c749bf34e6864324df6f210ae0807423252373a19af7a547d950ee5806545 not found: ID does not exist" Dec 09 17:11:43 crc kubenswrapper[4853]: I1209 17:11:43.592735 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" path="/var/lib/kubelet/pods/ea9a4cb2-8cc7-40e7-ab33-4050bab331c4/volumes" Dec 09 17:11:49 crc kubenswrapper[4853]: I1209 17:11:49.110650 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this 
instance owns no tokens Dec 09 17:11:49 crc kubenswrapper[4853]: I1209 17:11:49.112885 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e056741b-93d1-44c3-a16d-b6a04a3e1979" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 17:11:49 crc kubenswrapper[4853]: I1209 17:11:49.255585 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Dec 09 17:11:49 crc kubenswrapper[4853]: I1209 17:11:49.574213 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 17:11:57 crc kubenswrapper[4853]: I1209 17:11:57.645572 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-p22sf" Dec 09 17:11:57 crc kubenswrapper[4853]: I1209 17:11:57.826040 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-td5fh" Dec 09 17:11:57 crc kubenswrapper[4853]: I1209 17:11:57.924081 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-mv68d" Dec 09 17:11:59 crc kubenswrapper[4853]: I1209 17:11:59.108302 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Dec 09 17:11:59 crc kubenswrapper[4853]: I1209 17:11:59.108355 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e056741b-93d1-44c3-a16d-b6a04a3e1979" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.103778 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.104401 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e056741b-93d1-44c3-a16d-b6a04a3e1979" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.467460 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gkwsx"] Dec 09 17:12:09 crc kubenswrapper[4853]: E1209 17:12:09.468116 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerName="extract-utilities" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.468227 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerName="extract-utilities" Dec 09 17:12:09 crc kubenswrapper[4853]: E1209 17:12:09.468315 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerName="extract-content" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.468393 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" 
containerName="extract-content" Dec 09 17:12:09 crc kubenswrapper[4853]: E1209 17:12:09.468477 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerName="registry-server" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.468559 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerName="registry-server" Dec 09 17:12:09 crc kubenswrapper[4853]: E1209 17:12:09.468735 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="extract-content" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.468861 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="extract-content" Dec 09 17:12:09 crc kubenswrapper[4853]: E1209 17:12:09.468983 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="registry-server" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.469068 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="registry-server" Dec 09 17:12:09 crc kubenswrapper[4853]: E1209 17:12:09.469151 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="extract-utilities" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.469228 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="extract-utilities" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.469457 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9381fb79-81a2-477f-b416-aef4fdff3d46" containerName="registry-server" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.469563 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9a4cb2-8cc7-40e7-ab33-4050bab331c4" containerName="registry-server" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.471536 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.483315 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkwsx"] Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.615051 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcxn\" (UniqueName: \"kubernetes.io/projected/9ae33384-8780-4a42-9e76-e022adc9fa86-kube-api-access-kdcxn\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.615136 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-utilities\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.615185 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-catalog-content\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.716349 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-utilities\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.716423 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-catalog-content\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.716490 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdcxn\" (UniqueName: \"kubernetes.io/projected/9ae33384-8780-4a42-9e76-e022adc9fa86-kube-api-access-kdcxn\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.717770 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-utilities\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.717968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-catalog-content\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.742769 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kdcxn\" (UniqueName: \"kubernetes.io/projected/9ae33384-8780-4a42-9e76-e022adc9fa86-kube-api-access-kdcxn\") pod \"certified-operators-gkwsx\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:09 crc kubenswrapper[4853]: I1209 17:12:09.806701 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:10 crc kubenswrapper[4853]: I1209 17:12:10.341734 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkwsx"] Dec 09 17:12:11 crc kubenswrapper[4853]: I1209 17:12:11.148956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwsx" event={"ID":"9ae33384-8780-4a42-9e76-e022adc9fa86","Type":"ContainerStarted","Data":"1c645c35065100e3b90e28a651968a38d952ca3223ca5686001abdfe1481c1b8"} Dec 09 17:12:12 crc kubenswrapper[4853]: I1209 17:12:12.155965 4853 generic.go:334] "Generic (PLEG): container finished" podID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerID="b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d" exitCode=0 Dec 09 17:12:12 crc kubenswrapper[4853]: I1209 17:12:12.156064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwsx" event={"ID":"9ae33384-8780-4a42-9e76-e022adc9fa86","Type":"ContainerDied","Data":"b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d"} Dec 09 17:12:13 crc kubenswrapper[4853]: I1209 17:12:13.165927 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwsx" event={"ID":"9ae33384-8780-4a42-9e76-e022adc9fa86","Type":"ContainerStarted","Data":"41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f"} Dec 09 17:12:14 crc kubenswrapper[4853]: I1209 17:12:14.173710 4853 generic.go:334] "Generic (PLEG): container finished" podID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerID="41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f" exitCode=0 Dec 09 17:12:14 crc kubenswrapper[4853]: I1209 17:12:14.174032 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwsx" event={"ID":"9ae33384-8780-4a42-9e76-e022adc9fa86","Type":"ContainerDied","Data":"41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f"} Dec 09 17:12:15 crc kubenswrapper[4853]: I1209 17:12:15.183262 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwsx" event={"ID":"9ae33384-8780-4a42-9e76-e022adc9fa86","Type":"ContainerStarted","Data":"a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b"} Dec 09 17:12:15 crc kubenswrapper[4853]: I1209 17:12:15.204670 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gkwsx" podStartSLOduration=3.741584943 podStartE2EDuration="6.204642549s" podCreationTimestamp="2025-12-09 17:12:09 +0000 UTC" firstStartedPulling="2025-12-09 17:12:12.15803543 +0000 UTC m=+959.092774612" lastFinishedPulling="2025-12-09 17:12:14.621093016 +0000 UTC m=+961.555832218" observedRunningTime="2025-12-09 17:12:15.198341533 +0000 UTC m=+962.133080715" watchObservedRunningTime="2025-12-09 17:12:15.204642549 +0000 UTC m=+962.139381731" Dec 09 17:12:19 crc kubenswrapper[4853]: I1209 17:12:19.103951 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 
container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Dec 09 17:12:19 crc kubenswrapper[4853]: I1209 17:12:19.104287 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e056741b-93d1-44c3-a16d-b6a04a3e1979" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 17:12:19 crc kubenswrapper[4853]: I1209 17:12:19.807775 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:19 crc kubenswrapper[4853]: I1209 17:12:19.808175 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:19 crc kubenswrapper[4853]: I1209 17:12:19.867287 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:20 crc kubenswrapper[4853]: I1209 17:12:20.272681 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:20 crc kubenswrapper[4853]: I1209 17:12:20.327811 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkwsx"] Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.240445 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gkwsx" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="registry-server" containerID="cri-o://a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b" gracePeriod=2 Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.673068 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.820479 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-utilities\") pod \"9ae33384-8780-4a42-9e76-e022adc9fa86\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.820719 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdcxn\" (UniqueName: \"kubernetes.io/projected/9ae33384-8780-4a42-9e76-e022adc9fa86-kube-api-access-kdcxn\") pod \"9ae33384-8780-4a42-9e76-e022adc9fa86\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.820895 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-catalog-content\") pod \"9ae33384-8780-4a42-9e76-e022adc9fa86\" (UID: \"9ae33384-8780-4a42-9e76-e022adc9fa86\") " Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.822551 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-utilities" (OuterVolumeSpecName: "utilities") pod "9ae33384-8780-4a42-9e76-e022adc9fa86" (UID: "9ae33384-8780-4a42-9e76-e022adc9fa86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.829953 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae33384-8780-4a42-9e76-e022adc9fa86-kube-api-access-kdcxn" (OuterVolumeSpecName: "kube-api-access-kdcxn") pod "9ae33384-8780-4a42-9e76-e022adc9fa86" (UID: "9ae33384-8780-4a42-9e76-e022adc9fa86"). InnerVolumeSpecName "kube-api-access-kdcxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.875801 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae33384-8780-4a42-9e76-e022adc9fa86" (UID: "9ae33384-8780-4a42-9e76-e022adc9fa86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.923487 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.923532 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae33384-8780-4a42-9e76-e022adc9fa86-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:22 crc kubenswrapper[4853]: I1209 17:12:22.923547 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdcxn\" (UniqueName: \"kubernetes.io/projected/9ae33384-8780-4a42-9e76-e022adc9fa86-kube-api-access-kdcxn\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.251093 4853 generic.go:334] "Generic (PLEG): container finished" podID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerID="a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b" exitCode=0 Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.251467 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkwsx" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.251775 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwsx" event={"ID":"9ae33384-8780-4a42-9e76-e022adc9fa86","Type":"ContainerDied","Data":"a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b"} Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.251880 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkwsx" event={"ID":"9ae33384-8780-4a42-9e76-e022adc9fa86","Type":"ContainerDied","Data":"1c645c35065100e3b90e28a651968a38d952ca3223ca5686001abdfe1481c1b8"} Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.251905 4853 scope.go:117] "RemoveContainer" containerID="a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.284056 4853 scope.go:117] "RemoveContainer" containerID="41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.288420 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkwsx"] Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.295562 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gkwsx"] Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.315426 4853 scope.go:117] "RemoveContainer" containerID="b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.331717 4853 scope.go:117] "RemoveContainer" containerID="a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b" Dec 09 17:12:23 crc kubenswrapper[4853]: E1209 17:12:23.332201 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b\": container with ID starting with a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b not found: ID does not exist" containerID="a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.332238 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b"} err="failed to get container status \"a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b\": rpc error: code = NotFound desc = could not find container \"a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b\": container with ID starting with a51818ef95db2117afe199449961f17b42f92c91b6b5d2213c31fb8dd20f3c5b not found: ID does not exist" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.332263 4853 scope.go:117] "RemoveContainer" containerID="41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f" Dec 09 17:12:23 crc kubenswrapper[4853]: E1209 17:12:23.332673 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f\": container with ID starting with 41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f not found: ID does not exist" containerID="41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.332703 4853 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f"} err="failed to get container status \"41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f\": rpc error: code = NotFound desc = could not find container \"41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f\": container with ID starting with 41f9180b1e18d69279a77c3084306a3209f4177ced41c00243386d767be16e2f not found: ID does not exist" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.332723 4853 scope.go:117] "RemoveContainer" containerID="b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d" Dec 09 17:12:23 crc kubenswrapper[4853]: E1209 17:12:23.333132 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d\": container with ID starting with b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d not found: ID does not exist" containerID="b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.333160 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d"} err="failed to get container status \"b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d\": rpc error: code = NotFound desc = could not find container \"b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d\": container with ID starting with b36804d10e040993a88d94d1ecd9cb628b9cc1faf44213362a2ae1a415a11a7d not found: ID does not exist" Dec 09 17:12:23 crc kubenswrapper[4853]: I1209 17:12:23.580947 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" path="/var/lib/kubelet/pods/9ae33384-8780-4a42-9e76-e022adc9fa86/volumes" Dec 09 17:12:29 crc kubenswrapper[4853]: I1209 17:12:29.109892 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.542292 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-nkfx5"] Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.543129 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-content" Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.543147 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-content" Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.543165 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-utilities" Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.543173 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-utilities" Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.543186 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="registry-server" Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.543194 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="registry-server" Dec 09 
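The three DeleteContainer/NotFound pairs above are a benign race: by the time the dead-container cleanup asks the runtime for each container's status, pod teardown has already removed it. The same idempotent-delete pattern, as a sketch (NotFoundError and the runtime object are hypothetical stand-ins, not kubelet APIs):

    class NotFoundError(Exception):
        pass

    def remove_container(runtime, container_id: str) -> None:
        try:
            runtime.remove(container_id)
        except NotFoundError:
            # Already deleted by a concurrent path (e.g. pod teardown beat us
            # to it). "Not found" means the desired state is already reached,
            # so treat it as success rather than retrying.
            pass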
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.542292 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-nkfx5"]
Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.543129 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-content"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.543147 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-content"
Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.543165 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-utilities"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.543173 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="extract-utilities"
Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.543186 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="registry-server"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.543194 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="registry-server"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.543351 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae33384-8780-4a42-9e76-e022adc9fa86" containerName="registry-server"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.544360 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.548970 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.549421 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.549631 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-nb9m2"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.553981 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.557834 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-nkfx5"]
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.558665 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.559696 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.615965 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-nkfx5"]
Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.616550 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-qhr9q metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-nkfx5" podUID="87b0e37c-d482-4c51-903c-a2fb823c22eb"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617612 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617660 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/87b0e37c-d482-4c51-903c-a2fb823c22eb-tmp\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617686 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-sa-token\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617707 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/87b0e37c-d482-4c51-903c-a2fb823c22eb-datadir\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617725 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-metrics\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617776 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-trusted-ca\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617812 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617835 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-token\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617867 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-entrypoint\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config-openshift-service-cacrt\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.617971 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhr9q\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-kube-api-access-qhr9q\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/87b0e37c-d482-4c51-903c-a2fb823c22eb-tmp\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-sa-token\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720398 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/87b0e37c-d482-4c51-903c-a2fb823c22eb-datadir\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720420 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-metrics\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720452 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-trusted-ca\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-token\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720545 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-entrypoint\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720551 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/87b0e37c-d482-4c51-903c-a2fb823c22eb-datadir\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config-openshift-service-cacrt\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720686 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhr9q\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-kube-api-access-qhr9q\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.720694 4853 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found
Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.720716 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5"
Dec 09 17:12:48 crc kubenswrapper[4853]: E1209 17:12:48.720756 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver podName:87b0e37c-d482-4c51-903c-a2fb823c22eb nodeName:}" failed. No retries permitted until 2025-12-09 17:12:49.220734227 +0000 UTC m=+996.155473639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver") pod "collector-nkfx5" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb") : secret "collector-syslog-receiver" not found
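The secret mount fails only because collector-syslog-receiver does not exist yet, and nestedpendingoperations schedules the retry 500ms out (durationBeforeRetry); the retry at 17:12:49.228688 below then succeeds. A sketch of that retry shape; the doubling and the cap here are assumptions about the backoff, not values taken from this log:

    import time

    def mount_with_backoff(try_mount, initial=0.5, cap=120.0):
        delay = initial
        while True:
            try:
                return try_mount()
            except KeyError as err:              # stand-in for: secret "..." not found
                print(f"no retries permitted for {delay}s: {err}")
                time.sleep(delay)
                delay = min(delay * 2, cap)      # assumed: durationBeforeRetry grows per failure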
\"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5" Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.738856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-sa-token\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5" Dec 09 17:12:48 crc kubenswrapper[4853]: I1209 17:12:48.739939 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhr9q\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-kube-api-access-qhr9q\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.228688 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.234306 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver\") pod \"collector-nkfx5\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " pod="openshift-logging/collector-nkfx5" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.474715 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-nkfx5" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.492230 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-nkfx5" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533771 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-entrypoint\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533827 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-metrics\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533857 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/87b0e37c-d482-4c51-903c-a2fb823c22eb-datadir\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533880 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-token\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533903 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhr9q\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-kube-api-access-qhr9q\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533901 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87b0e37c-d482-4c51-903c-a2fb823c22eb-datadir" (OuterVolumeSpecName: "datadir") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533936 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-sa-token\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.533959 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.534077 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.534101 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-trusted-ca\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.534130 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config-openshift-service-cacrt\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.534159 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/87b0e37c-d482-4c51-903c-a2fb823c22eb-tmp\") pod \"87b0e37c-d482-4c51-903c-a2fb823c22eb\" (UID: \"87b0e37c-d482-4c51-903c-a2fb823c22eb\") " Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.534344 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.534895 4853 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/87b0e37c-d482-4c51-903c-a2fb823c22eb-datadir\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.534950 4853 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-entrypoint\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.535046 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config" (OuterVolumeSpecName: "config") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.535254 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.535691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.539299 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-kube-api-access-qhr9q" (OuterVolumeSpecName: "kube-api-access-qhr9q") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "kube-api-access-qhr9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.539839 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-token" (OuterVolumeSpecName: "collector-token") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.540428 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-metrics" (OuterVolumeSpecName: "metrics") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.541496 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b0e37c-d482-4c51-903c-a2fb823c22eb-tmp" (OuterVolumeSpecName: "tmp") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.542866 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.544340 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-sa-token" (OuterVolumeSpecName: "sa-token") pod "87b0e37c-d482-4c51-903c-a2fb823c22eb" (UID: "87b0e37c-d482-4c51-903c-a2fb823c22eb"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.636478 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637045 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637162 4853 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/87b0e37c-d482-4c51-903c-a2fb823c22eb-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637270 4853 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/87b0e37c-d482-4c51-903c-a2fb823c22eb-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637355 4853 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637436 4853 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-token\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637520 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhr9q\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-kube-api-access-qhr9q\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637620 4853 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/87b0e37c-d482-4c51-903c-a2fb823c22eb-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:49 crc kubenswrapper[4853]: I1209 17:12:49.637706 4853 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/87b0e37c-d482-4c51-903c-a2fb823c22eb-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.481370 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-nkfx5" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.548050 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-nkfx5"] Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.554758 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-pm7gl"] Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.555968 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.559947 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.560046 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-nb9m2" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.560434 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.560845 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.561173 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.573066 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.575678 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-nkfx5"] Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.582699 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-pm7gl"] Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.656925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-config\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.656981 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3161e6e-aba7-422a-a2a5-f6384a378672-tmp\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-config-openshift-service-cacrt\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657071 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8895\" (UniqueName: \"kubernetes.io/projected/f3161e6e-aba7-422a-a2a5-f6384a378672-kube-api-access-m8895\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657124 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f3161e6e-aba7-422a-a2a5-f6384a378672-datadir\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657150 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-trusted-ca\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-entrypoint\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657261 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-collector-syslog-receiver\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657314 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f3161e6e-aba7-422a-a2a5-f6384a378672-sa-token\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657372 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-metrics\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.657614 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-collector-token\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.758907 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-config-openshift-service-cacrt\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.758950 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8895\" (UniqueName: \"kubernetes.io/projected/f3161e6e-aba7-422a-a2a5-f6384a378672-kube-api-access-m8895\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.758986 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f3161e6e-aba7-422a-a2a5-f6384a378672-datadir\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-trusted-ca\") pod \"collector-pm7gl\" 
(UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759042 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-entrypoint\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-collector-syslog-receiver\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f3161e6e-aba7-422a-a2a5-f6384a378672-sa-token\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759130 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-metrics\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759155 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-collector-token\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759207 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-config\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759231 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3161e6e-aba7-422a-a2a5-f6384a378672-tmp\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759752 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-config-openshift-service-cacrt\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.759883 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f3161e6e-aba7-422a-a2a5-f6384a378672-datadir\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.760357 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-trusted-ca\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.761714 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-entrypoint\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.762070 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3161e6e-aba7-422a-a2a5-f6384a378672-config\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.763347 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-collector-token\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.763360 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-metrics\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.765079 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f3161e6e-aba7-422a-a2a5-f6384a378672-collector-syslog-receiver\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.766743 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f3161e6e-aba7-422a-a2a5-f6384a378672-tmp\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.778399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8895\" (UniqueName: \"kubernetes.io/projected/f3161e6e-aba7-422a-a2a5-f6384a378672-kube-api-access-m8895\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.780998 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f3161e6e-aba7-422a-a2a5-f6384a378672-sa-token\") pod \"collector-pm7gl\" (UID: \"f3161e6e-aba7-422a-a2a5-f6384a378672\") " pod="openshift-logging/collector-pm7gl" Dec 09 17:12:50 crc kubenswrapper[4853]: I1209 17:12:50.879025 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-pm7gl" Dec 09 17:12:51 crc kubenswrapper[4853]: I1209 17:12:51.372451 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-pm7gl"] Dec 09 17:12:51 crc kubenswrapper[4853]: I1209 17:12:51.492080 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-pm7gl" event={"ID":"f3161e6e-aba7-422a-a2a5-f6384a378672","Type":"ContainerStarted","Data":"7c019a76022f0c6d1365936361dd6182bd80bbe468cdd470adfa1163a093912b"} Dec 09 17:12:51 crc kubenswrapper[4853]: I1209 17:12:51.582239 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b0e37c-d482-4c51-903c-a2fb823c22eb" path="/var/lib/kubelet/pods/87b0e37c-d482-4c51-903c-a2fb823c22eb/volumes" Dec 09 17:12:58 crc kubenswrapper[4853]: I1209 17:12:58.593697 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:12:58 crc kubenswrapper[4853]: I1209 17:12:58.594787 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:12:59 crc kubenswrapper[4853]: I1209 17:12:59.555999 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-pm7gl" event={"ID":"f3161e6e-aba7-422a-a2a5-f6384a378672","Type":"ContainerStarted","Data":"c2408673c83366e67f5e0d4ccaebc83aca991fb690c8d00129022e176a3b967d"} Dec 09 17:12:59 crc kubenswrapper[4853]: I1209 17:12:59.591575 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-pm7gl" podStartSLOduration=2.441769412 podStartE2EDuration="9.591537316s" podCreationTimestamp="2025-12-09 17:12:50 +0000 UTC" firstStartedPulling="2025-12-09 17:12:51.372980073 +0000 UTC m=+998.307719285" lastFinishedPulling="2025-12-09 17:12:58.522748007 +0000 UTC m=+1005.457487189" observedRunningTime="2025-12-09 17:12:59.586884207 +0000 UTC m=+1006.521623389" watchObservedRunningTime="2025-12-09 17:12:59.591537316 +0000 UTC m=+1006.526276498" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.110095 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv"] Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.114830 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.125840 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.168342 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv"] Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.286069 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.286145 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4jl\" (UniqueName: \"kubernetes.io/projected/7a72a4c1-6c6e-4022-ab9b-186a8814affc-kube-api-access-lt4jl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.286171 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.388291 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.388412 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt4jl\" (UniqueName: \"kubernetes.io/projected/7a72a4c1-6c6e-4022-ab9b-186a8814affc-kube-api-access-lt4jl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.388471 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.388898 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.389263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.412041 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt4jl\" (UniqueName: \"kubernetes.io/projected/7a72a4c1-6c6e-4022-ab9b-186a8814affc-kube-api-access-lt4jl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.439063 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.592669 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.592733 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.753552 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv"] Dec 09 17:13:28 crc kubenswrapper[4853]: I1209 17:13:28.802321 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" event={"ID":"7a72a4c1-6c6e-4022-ab9b-186a8814affc","Type":"ContainerStarted","Data":"07345c976527140b309edec46465e0c316855dbfadcdd7b11e504da559be15c0"} Dec 09 17:13:29 crc kubenswrapper[4853]: I1209 17:13:29.814062 4853 generic.go:334] "Generic (PLEG): container finished" podID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerID="f8036e505b1c14f0f1c04e110da9061c6530aac39ee7bdbe6c85838fc76f79f0" exitCode=0 Dec 09 17:13:29 crc kubenswrapper[4853]: I1209 17:13:29.814177 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" event={"ID":"7a72a4c1-6c6e-4022-ab9b-186a8814affc","Type":"ContainerDied","Data":"f8036e505b1c14f0f1c04e110da9061c6530aac39ee7bdbe6c85838fc76f79f0"} Dec 09 17:13:31 crc kubenswrapper[4853]: I1209 17:13:31.835315 4853 generic.go:334] "Generic (PLEG): container finished" podID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" 
containerID="d8186954adc5e1ac46fcbbcfd972dbe00237a1b9f57ffe6f9dd7af247cade356" exitCode=0 Dec 09 17:13:31 crc kubenswrapper[4853]: I1209 17:13:31.835444 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" event={"ID":"7a72a4c1-6c6e-4022-ab9b-186a8814affc","Type":"ContainerDied","Data":"d8186954adc5e1ac46fcbbcfd972dbe00237a1b9f57ffe6f9dd7af247cade356"} Dec 09 17:13:32 crc kubenswrapper[4853]: I1209 17:13:32.849384 4853 generic.go:334] "Generic (PLEG): container finished" podID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerID="dae1afce8b92e038476e6f6c532f3e94bcae9d6700c74f058eb058d0556b5eef" exitCode=0 Dec 09 17:13:32 crc kubenswrapper[4853]: I1209 17:13:32.851060 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" event={"ID":"7a72a4c1-6c6e-4022-ab9b-186a8814affc","Type":"ContainerDied","Data":"dae1afce8b92e038476e6f6c532f3e94bcae9d6700c74f058eb058d0556b5eef"} Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.143301 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.300165 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt4jl\" (UniqueName: \"kubernetes.io/projected/7a72a4c1-6c6e-4022-ab9b-186a8814affc-kube-api-access-lt4jl\") pod \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.300309 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-bundle\") pod \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.300462 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-util\") pod \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\" (UID: \"7a72a4c1-6c6e-4022-ab9b-186a8814affc\") " Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.301466 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-bundle" (OuterVolumeSpecName: "bundle") pod "7a72a4c1-6c6e-4022-ab9b-186a8814affc" (UID: "7a72a4c1-6c6e-4022-ab9b-186a8814affc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.307887 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a72a4c1-6c6e-4022-ab9b-186a8814affc-kube-api-access-lt4jl" (OuterVolumeSpecName: "kube-api-access-lt4jl") pod "7a72a4c1-6c6e-4022-ab9b-186a8814affc" (UID: "7a72a4c1-6c6e-4022-ab9b-186a8814affc"). InnerVolumeSpecName "kube-api-access-lt4jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.403336 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt4jl\" (UniqueName: \"kubernetes.io/projected/7a72a4c1-6c6e-4022-ab9b-186a8814affc-kube-api-access-lt4jl\") on node \"crc\" DevicePath \"\"" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.403406 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.512288 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-util" (OuterVolumeSpecName: "util") pod "7a72a4c1-6c6e-4022-ab9b-186a8814affc" (UID: "7a72a4c1-6c6e-4022-ab9b-186a8814affc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.606790 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a72a4c1-6c6e-4022-ab9b-186a8814affc-util\") on node \"crc\" DevicePath \"\"" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.867403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" event={"ID":"7a72a4c1-6c6e-4022-ab9b-186a8814affc","Type":"ContainerDied","Data":"07345c976527140b309edec46465e0c316855dbfadcdd7b11e504da559be15c0"} Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.867792 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07345c976527140b309edec46465e0c316855dbfadcdd7b11e504da559be15c0" Dec 09 17:13:34 crc kubenswrapper[4853]: I1209 17:13:34.867460 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.991331 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m"] Dec 09 17:13:40 crc kubenswrapper[4853]: E1209 17:13:40.992205 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerName="extract" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.992223 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerName="extract" Dec 09 17:13:40 crc kubenswrapper[4853]: E1209 17:13:40.992237 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerName="util" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.992245 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerName="util" Dec 09 17:13:40 crc kubenswrapper[4853]: E1209 17:13:40.992263 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerName="pull" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.992270 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerName="pull" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.992409 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a72a4c1-6c6e-4022-ab9b-186a8814affc" containerName="extract" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.992999 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.995989 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.996155 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 09 17:13:40 crc kubenswrapper[4853]: I1209 17:13:40.996217 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bcks4" Dec 09 17:13:41 crc kubenswrapper[4853]: I1209 17:13:41.007004 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m"] Dec 09 17:13:41 crc kubenswrapper[4853]: I1209 17:13:41.012469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jj5\" (UniqueName: \"kubernetes.io/projected/b2e89fb2-6e5f-4074-898b-fe3cca63994d-kube-api-access-r9jj5\") pod \"nmstate-operator-5b5b58f5c8-6br2m\" (UID: \"b2e89fb2-6e5f-4074-898b-fe3cca63994d\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" Dec 09 17:13:41 crc kubenswrapper[4853]: I1209 17:13:41.114170 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9jj5\" (UniqueName: \"kubernetes.io/projected/b2e89fb2-6e5f-4074-898b-fe3cca63994d-kube-api-access-r9jj5\") pod \"nmstate-operator-5b5b58f5c8-6br2m\" (UID: \"b2e89fb2-6e5f-4074-898b-fe3cca63994d\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" Dec 09 17:13:41 crc kubenswrapper[4853]: I1209 17:13:41.138846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9jj5\" 
(UniqueName: \"kubernetes.io/projected/b2e89fb2-6e5f-4074-898b-fe3cca63994d-kube-api-access-r9jj5\") pod \"nmstate-operator-5b5b58f5c8-6br2m\" (UID: \"b2e89fb2-6e5f-4074-898b-fe3cca63994d\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" Dec 09 17:13:41 crc kubenswrapper[4853]: I1209 17:13:41.316588 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" Dec 09 17:13:41 crc kubenswrapper[4853]: I1209 17:13:41.765139 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m"] Dec 09 17:13:41 crc kubenswrapper[4853]: I1209 17:13:41.939403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" event={"ID":"b2e89fb2-6e5f-4074-898b-fe3cca63994d","Type":"ContainerStarted","Data":"58682bbe7de09c8d918014e7a95c16c71d2b02ae9c994937ae60b46893465219"} Dec 09 17:13:45 crc kubenswrapper[4853]: I1209 17:13:45.066759 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" event={"ID":"b2e89fb2-6e5f-4074-898b-fe3cca63994d","Type":"ContainerStarted","Data":"94c969ece668df3b2c4dd06e3d6d8e5a78d2a6fb4384a61ced87e46f0f70354c"} Dec 09 17:13:45 crc kubenswrapper[4853]: I1209 17:13:45.087017 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6br2m" podStartSLOduration=2.377255347 podStartE2EDuration="5.08699224s" podCreationTimestamp="2025-12-09 17:13:40 +0000 UTC" firstStartedPulling="2025-12-09 17:13:41.773635733 +0000 UTC m=+1048.708374915" lastFinishedPulling="2025-12-09 17:13:44.483372626 +0000 UTC m=+1051.418111808" observedRunningTime="2025-12-09 17:13:45.08263005 +0000 UTC m=+1052.017369242" watchObservedRunningTime="2025-12-09 17:13:45.08699224 +0000 UTC m=+1052.021731432" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.214374 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.215876 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.219140 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nfch5" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.231788 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.237727 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.238628 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.244649 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.246667 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.295281 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bb6s6"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.296438 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.326754 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/535a4202-4d27-4167-9297-c2309fe99da9-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-rphkb\" (UID: \"535a4202-4d27-4167-9297-c2309fe99da9\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.326831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6kn\" (UniqueName: \"kubernetes.io/projected/fb495110-ff30-42c3-89ef-ddcee729c1bf-kube-api-access-jh6kn\") pod \"nmstate-metrics-7f946cbc9-fkxp8\" (UID: \"fb495110-ff30-42c3-89ef-ddcee729c1bf\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.326889 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvz7x\" (UniqueName: \"kubernetes.io/projected/535a4202-4d27-4167-9297-c2309fe99da9-kube-api-access-gvz7x\") pod \"nmstate-webhook-5f6d4c5ccb-rphkb\" (UID: \"535a4202-4d27-4167-9297-c2309fe99da9\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.379253 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.380151 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.381999 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.396341 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-shjzz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.397664 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.400357 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.428485 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxr5\" (UniqueName: \"kubernetes.io/projected/449b3cb4-bb3a-480c-8a62-6b36c111037e-kube-api-access-8bxr5\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.428795 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvz7x\" (UniqueName: \"kubernetes.io/projected/535a4202-4d27-4167-9297-c2309fe99da9-kube-api-access-gvz7x\") pod \"nmstate-webhook-5f6d4c5ccb-rphkb\" (UID: \"535a4202-4d27-4167-9297-c2309fe99da9\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.428935 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-ovs-socket\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.429114 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/535a4202-4d27-4167-9297-c2309fe99da9-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-rphkb\" (UID: \"535a4202-4d27-4167-9297-c2309fe99da9\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.429234 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-dbus-socket\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.429375 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6kn\" (UniqueName: \"kubernetes.io/projected/fb495110-ff30-42c3-89ef-ddcee729c1bf-kube-api-access-jh6kn\") pod \"nmstate-metrics-7f946cbc9-fkxp8\" (UID: \"fb495110-ff30-42c3-89ef-ddcee729c1bf\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.429495 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-nmstate-lock\") pod \"nmstate-handler-bb6s6\" (UID: 
\"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: E1209 17:13:49.429278 4853 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 09 17:13:49 crc kubenswrapper[4853]: E1209 17:13:49.429681 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/535a4202-4d27-4167-9297-c2309fe99da9-tls-key-pair podName:535a4202-4d27-4167-9297-c2309fe99da9 nodeName:}" failed. No retries permitted until 2025-12-09 17:13:49.929662356 +0000 UTC m=+1056.864401628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/535a4202-4d27-4167-9297-c2309fe99da9-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-rphkb" (UID: "535a4202-4d27-4167-9297-c2309fe99da9") : secret "openshift-nmstate-webhook" not found Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.454237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6kn\" (UniqueName: \"kubernetes.io/projected/fb495110-ff30-42c3-89ef-ddcee729c1bf-kube-api-access-jh6kn\") pod \"nmstate-metrics-7f946cbc9-fkxp8\" (UID: \"fb495110-ff30-42c3-89ef-ddcee729c1bf\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.454261 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvz7x\" (UniqueName: \"kubernetes.io/projected/535a4202-4d27-4167-9297-c2309fe99da9-kube-api-access-gvz7x\") pod \"nmstate-webhook-5f6d4c5ccb-rphkb\" (UID: \"535a4202-4d27-4167-9297-c2309fe99da9\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.530845 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-ovs-socket\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.530898 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/648c71b4-40ba-4038-8779-b4971316abda-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.530918 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/648c71b4-40ba-4038-8779-b4971316abda-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.530989 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-dbus-socket\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.531030 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-k5mtl\" (UniqueName: \"kubernetes.io/projected/648c71b4-40ba-4038-8779-b4971316abda-kube-api-access-k5mtl\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.531058 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-nmstate-lock\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.531079 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxr5\" (UniqueName: \"kubernetes.io/projected/449b3cb4-bb3a-480c-8a62-6b36c111037e-kube-api-access-8bxr5\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.531186 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-ovs-socket\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.531262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-nmstate-lock\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.531405 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/449b3cb4-bb3a-480c-8a62-6b36c111037e-dbus-socket\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.531526 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.555559 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxr5\" (UniqueName: \"kubernetes.io/projected/449b3cb4-bb3a-480c-8a62-6b36c111037e-kube-api-access-8bxr5\") pod \"nmstate-handler-bb6s6\" (UID: \"449b3cb4-bb3a-480c-8a62-6b36c111037e\") " pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.583923 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-658f7d5d7b-lbkxw"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.584805 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.598051 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-658f7d5d7b-lbkxw"] Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.617972 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.631953 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mtl\" (UniqueName: \"kubernetes.io/projected/648c71b4-40ba-4038-8779-b4971316abda-kube-api-access-k5mtl\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.632249 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/648c71b4-40ba-4038-8779-b4971316abda-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.632347 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/648c71b4-40ba-4038-8779-b4971316abda-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: E1209 17:13:49.632539 4853 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.633829 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/648c71b4-40ba-4038-8779-b4971316abda-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: E1209 17:13:49.633915 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648c71b4-40ba-4038-8779-b4971316abda-plugin-serving-cert podName:648c71b4-40ba-4038-8779-b4971316abda nodeName:}" failed. No retries permitted until 2025-12-09 17:13:50.13387245 +0000 UTC m=+1057.068611632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/648c71b4-40ba-4038-8779-b4971316abda-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-642fz" (UID: "648c71b4-40ba-4038-8779-b4971316abda") : secret "plugin-serving-cert" not found Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.654369 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5mtl\" (UniqueName: \"kubernetes.io/projected/648c71b4-40ba-4038-8779-b4971316abda-kube-api-access-k5mtl\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.736405 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-trusted-ca-bundle\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.736468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz6qh\" (UniqueName: \"kubernetes.io/projected/511cc5d6-eccf-443e-92c3-8b0af97b904a-kube-api-access-mz6qh\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.736551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-service-ca\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.736584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-oauth-config\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.736618 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-oauth-serving-cert\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.736638 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-config\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.736898 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-serving-cert\") pod \"console-658f7d5d7b-lbkxw\" (UID: 
\"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.838213 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-serving-cert\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.838516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-trusted-ca-bundle\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.838548 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz6qh\" (UniqueName: \"kubernetes.io/projected/511cc5d6-eccf-443e-92c3-8b0af97b904a-kube-api-access-mz6qh\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.838641 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-service-ca\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.838664 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-oauth-config\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.838683 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-oauth-serving-cert\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.838704 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-config\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.839456 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-config\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.840054 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-service-ca\") pod \"console-658f7d5d7b-lbkxw\" (UID: 
\"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.840898 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-oauth-serving-cert\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.846454 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-oauth-config\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.850230 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-trusted-ca-bundle\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.850983 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-serving-cert\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.859424 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz6qh\" (UniqueName: \"kubernetes.io/projected/511cc5d6-eccf-443e-92c3-8b0af97b904a-kube-api-access-mz6qh\") pod \"console-658f7d5d7b-lbkxw\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.914243 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.940614 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/535a4202-4d27-4167-9297-c2309fe99da9-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-rphkb\" (UID: \"535a4202-4d27-4167-9297-c2309fe99da9\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:49 crc kubenswrapper[4853]: I1209 17:13:49.944481 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/535a4202-4d27-4167-9297-c2309fe99da9-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-rphkb\" (UID: \"535a4202-4d27-4167-9297-c2309fe99da9\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.111143 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bb6s6" event={"ID":"449b3cb4-bb3a-480c-8a62-6b36c111037e","Type":"ContainerStarted","Data":"d270b87244ecc5557dd3c643607a3896642e0a6a157eba54ed2153d360ac54b5"} Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.145784 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/648c71b4-40ba-4038-8779-b4971316abda-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.155283 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.156093 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/648c71b4-40ba-4038-8779-b4971316abda-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-642fz\" (UID: \"648c71b4-40ba-4038-8779-b4971316abda\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.190438 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8"] Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.298229 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.408077 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-658f7d5d7b-lbkxw"] Dec 09 17:13:50 crc kubenswrapper[4853]: W1209 17:13:50.425173 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod511cc5d6_eccf_443e_92c3_8b0af97b904a.slice/crio-aa18419ddd9d0765a08c0550dd9bfaca19e12ee2788b9dc80f58198a38115225 WatchSource:0}: Error finding container aa18419ddd9d0765a08c0550dd9bfaca19e12ee2788b9dc80f58198a38115225: Status 404 returned error can't find the container with id aa18419ddd9d0765a08c0550dd9bfaca19e12ee2788b9dc80f58198a38115225 Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.566954 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb"] Dec 09 17:13:50 crc kubenswrapper[4853]: I1209 17:13:50.806425 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz"] Dec 09 17:13:50 crc kubenswrapper[4853]: W1209 17:13:50.813821 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod648c71b4_40ba_4038_8779_b4971316abda.slice/crio-8ce190a4b8c775e77ef78fd9cabd6555c783bd222da5b7f487b46d33f3b98577 WatchSource:0}: Error finding container 8ce190a4b8c775e77ef78fd9cabd6555c783bd222da5b7f487b46d33f3b98577: Status 404 returned error can't find the container with id 8ce190a4b8c775e77ef78fd9cabd6555c783bd222da5b7f487b46d33f3b98577 Dec 09 17:13:51 crc kubenswrapper[4853]: I1209 17:13:51.123545 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" event={"ID":"535a4202-4d27-4167-9297-c2309fe99da9","Type":"ContainerStarted","Data":"6898a19b8d90e584d2bbaa04d3d2265e4221847cf87fe9812f0d7c5fc4554b7e"} Dec 09 17:13:51 crc kubenswrapper[4853]: I1209 17:13:51.125473 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" event={"ID":"fb495110-ff30-42c3-89ef-ddcee729c1bf","Type":"ContainerStarted","Data":"f5fa18ea2df5739e760e31bee19688198bae9255333e5cb1eddb918aa49a8c8b"} Dec 09 17:13:51 crc kubenswrapper[4853]: I1209 17:13:51.127756 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-658f7d5d7b-lbkxw" event={"ID":"511cc5d6-eccf-443e-92c3-8b0af97b904a","Type":"ContainerStarted","Data":"93272ba077b8ccfdf9f1ffda9af50df1405300d2633303a3a565e16f2de6ec4d"} Dec 09 17:13:51 crc kubenswrapper[4853]: I1209 17:13:51.127797 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-658f7d5d7b-lbkxw" event={"ID":"511cc5d6-eccf-443e-92c3-8b0af97b904a","Type":"ContainerStarted","Data":"aa18419ddd9d0765a08c0550dd9bfaca19e12ee2788b9dc80f58198a38115225"} Dec 09 17:13:51 crc kubenswrapper[4853]: I1209 17:13:51.131561 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" event={"ID":"648c71b4-40ba-4038-8779-b4971316abda","Type":"ContainerStarted","Data":"8ce190a4b8c775e77ef78fd9cabd6555c783bd222da5b7f487b46d33f3b98577"} Dec 09 17:13:51 crc kubenswrapper[4853]: I1209 17:13:51.156905 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-658f7d5d7b-lbkxw" podStartSLOduration=2.156884731 
podStartE2EDuration="2.156884731s" podCreationTimestamp="2025-12-09 17:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:13:51.151568954 +0000 UTC m=+1058.086308166" watchObservedRunningTime="2025-12-09 17:13:51.156884731 +0000 UTC m=+1058.091623913" Dec 09 17:13:54 crc kubenswrapper[4853]: I1209 17:13:54.171528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" event={"ID":"fb495110-ff30-42c3-89ef-ddcee729c1bf","Type":"ContainerStarted","Data":"66321a2c66c2c59970290f5707bfe8acbe58ecce70434a0d90f89f215f0bf9cb"} Dec 09 17:13:54 crc kubenswrapper[4853]: I1209 17:13:54.174063 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bb6s6" event={"ID":"449b3cb4-bb3a-480c-8a62-6b36c111037e","Type":"ContainerStarted","Data":"76b3812df058599d7dd8e98f2105fcfa8068e7084cd2583607f6e4d73ff8863a"} Dec 09 17:13:54 crc kubenswrapper[4853]: I1209 17:13:54.174302 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:54 crc kubenswrapper[4853]: I1209 17:13:54.196914 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bb6s6" podStartSLOduration=1.7803926730000001 podStartE2EDuration="5.196893246s" podCreationTimestamp="2025-12-09 17:13:49 +0000 UTC" firstStartedPulling="2025-12-09 17:13:49.684711943 +0000 UTC m=+1056.619451125" lastFinishedPulling="2025-12-09 17:13:53.101212516 +0000 UTC m=+1060.035951698" observedRunningTime="2025-12-09 17:13:54.194642234 +0000 UTC m=+1061.129381416" watchObservedRunningTime="2025-12-09 17:13:54.196893246 +0000 UTC m=+1061.131632438" Dec 09 17:13:55 crc kubenswrapper[4853]: I1209 17:13:55.181694 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" event={"ID":"535a4202-4d27-4167-9297-c2309fe99da9","Type":"ContainerStarted","Data":"989f73a7d0d31ca2f93836c6b6f78189d4be80a77e21fa207d22b57cced134b3"} Dec 09 17:13:55 crc kubenswrapper[4853]: I1209 17:13:55.182050 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:13:55 crc kubenswrapper[4853]: I1209 17:13:55.183270 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" event={"ID":"648c71b4-40ba-4038-8779-b4971316abda","Type":"ContainerStarted","Data":"3715cec10862c16b2823524a2b5399f42f87c4684b53fba20fd276729c300653"} Dec 09 17:13:55 crc kubenswrapper[4853]: I1209 17:13:55.204342 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" podStartSLOduration=2.100011441 podStartE2EDuration="6.20432005s" podCreationTimestamp="2025-12-09 17:13:49 +0000 UTC" firstStartedPulling="2025-12-09 17:13:50.572473637 +0000 UTC m=+1057.507212809" lastFinishedPulling="2025-12-09 17:13:54.676782236 +0000 UTC m=+1061.611521418" observedRunningTime="2025-12-09 17:13:55.194260083 +0000 UTC m=+1062.128999285" watchObservedRunningTime="2025-12-09 17:13:55.20432005 +0000 UTC m=+1062.139059232" Dec 09 17:13:55 crc kubenswrapper[4853]: I1209 17:13:55.232744 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-642fz" podStartSLOduration=2.402014415 
podStartE2EDuration="6.232718565s" podCreationTimestamp="2025-12-09 17:13:49 +0000 UTC" firstStartedPulling="2025-12-09 17:13:50.817761325 +0000 UTC m=+1057.752500507" lastFinishedPulling="2025-12-09 17:13:54.648465475 +0000 UTC m=+1061.583204657" observedRunningTime="2025-12-09 17:13:55.219214992 +0000 UTC m=+1062.153954184" watchObservedRunningTime="2025-12-09 17:13:55.232718565 +0000 UTC m=+1062.167457747" Dec 09 17:13:56 crc kubenswrapper[4853]: I1209 17:13:56.195758 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" event={"ID":"fb495110-ff30-42c3-89ef-ddcee729c1bf","Type":"ContainerStarted","Data":"7bae6738cc42eb13818bc9864e08fbb8c39655d14dc35d72ddb8c87c002a3a97"} Dec 09 17:13:56 crc kubenswrapper[4853]: I1209 17:13:56.224641 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fkxp8" podStartSLOduration=1.4017544960000001 podStartE2EDuration="7.224581571s" podCreationTimestamp="2025-12-09 17:13:49 +0000 UTC" firstStartedPulling="2025-12-09 17:13:50.192231515 +0000 UTC m=+1057.126970717" lastFinishedPulling="2025-12-09 17:13:56.01505861 +0000 UTC m=+1062.949797792" observedRunningTime="2025-12-09 17:13:56.21226133 +0000 UTC m=+1063.147000512" watchObservedRunningTime="2025-12-09 17:13:56.224581571 +0000 UTC m=+1063.159320793" Dec 09 17:13:58 crc kubenswrapper[4853]: I1209 17:13:58.593541 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:13:58 crc kubenswrapper[4853]: I1209 17:13:58.594129 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:13:58 crc kubenswrapper[4853]: I1209 17:13:58.594177 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:13:58 crc kubenswrapper[4853]: I1209 17:13:58.594912 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f1e13e4d459d808e60b6045bc9ee20eb1af70f1e10bf78d11a94776b32f36e9"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:13:58 crc kubenswrapper[4853]: I1209 17:13:58.594961 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://0f1e13e4d459d808e60b6045bc9ee20eb1af70f1e10bf78d11a94776b32f36e9" gracePeriod=600 Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.219853 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="0f1e13e4d459d808e60b6045bc9ee20eb1af70f1e10bf78d11a94776b32f36e9" exitCode=0 Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.219917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"0f1e13e4d459d808e60b6045bc9ee20eb1af70f1e10bf78d11a94776b32f36e9"} Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.220117 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"ff410bbb47eb0d8e5f80ec7cc8ea558647698f94ee7441c8421ab12f2216ccf7"} Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.220147 4853 scope.go:117] "RemoveContainer" containerID="8a118dfb72dd6cb3a09eba31a0c1c9fb6b48c60ceaaef13411d332f6c915d49b" Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.646099 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bb6s6" Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.915366 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.915427 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:13:59 crc kubenswrapper[4853]: I1209 17:13:59.924834 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:14:00 crc kubenswrapper[4853]: I1209 17:14:00.234266 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:14:00 crc kubenswrapper[4853]: I1209 17:14:00.284310 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bfcc5b469-tzkw9"] Dec 09 17:14:10 crc kubenswrapper[4853]: I1209 17:14:10.163550 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-rphkb" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.346395 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6bfcc5b469-tzkw9" podUID="e705ec5e-fd22-4050-be73-f79ec4c45320" containerName="console" containerID="cri-o://59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404" gracePeriod=15 Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.774650 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bfcc5b469-tzkw9_e705ec5e-fd22-4050-be73-f79ec4c45320/console/0.log" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.775144 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.872918 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-trusted-ca-bundle\") pod \"e705ec5e-fd22-4050-be73-f79ec4c45320\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.872970 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-oauth-config\") pod \"e705ec5e-fd22-4050-be73-f79ec4c45320\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.873006 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-serving-cert\") pod \"e705ec5e-fd22-4050-be73-f79ec4c45320\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.873057 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-console-config\") pod \"e705ec5e-fd22-4050-be73-f79ec4c45320\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.873099 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-service-ca\") pod \"e705ec5e-fd22-4050-be73-f79ec4c45320\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.873152 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgl87\" (UniqueName: \"kubernetes.io/projected/e705ec5e-fd22-4050-be73-f79ec4c45320-kube-api-access-zgl87\") pod \"e705ec5e-fd22-4050-be73-f79ec4c45320\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.873188 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-oauth-serving-cert\") pod \"e705ec5e-fd22-4050-be73-f79ec4c45320\" (UID: \"e705ec5e-fd22-4050-be73-f79ec4c45320\") " Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.874497 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e705ec5e-fd22-4050-be73-f79ec4c45320" (UID: "e705ec5e-fd22-4050-be73-f79ec4c45320"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.874857 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-service-ca" (OuterVolumeSpecName: "service-ca") pod "e705ec5e-fd22-4050-be73-f79ec4c45320" (UID: "e705ec5e-fd22-4050-be73-f79ec4c45320"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.875140 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-console-config" (OuterVolumeSpecName: "console-config") pod "e705ec5e-fd22-4050-be73-f79ec4c45320" (UID: "e705ec5e-fd22-4050-be73-f79ec4c45320"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.875230 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e705ec5e-fd22-4050-be73-f79ec4c45320" (UID: "e705ec5e-fd22-4050-be73-f79ec4c45320"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.880823 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e705ec5e-fd22-4050-be73-f79ec4c45320-kube-api-access-zgl87" (OuterVolumeSpecName: "kube-api-access-zgl87") pod "e705ec5e-fd22-4050-be73-f79ec4c45320" (UID: "e705ec5e-fd22-4050-be73-f79ec4c45320"). InnerVolumeSpecName "kube-api-access-zgl87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.881101 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e705ec5e-fd22-4050-be73-f79ec4c45320" (UID: "e705ec5e-fd22-4050-be73-f79ec4c45320"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.881173 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e705ec5e-fd22-4050-be73-f79ec4c45320" (UID: "e705ec5e-fd22-4050-be73-f79ec4c45320"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.974476 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.974513 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.974523 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.974535 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e705ec5e-fd22-4050-be73-f79ec4c45320-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.974546 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.974647 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e705ec5e-fd22-4050-be73-f79ec4c45320-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:25 crc kubenswrapper[4853]: I1209 17:14:25.974662 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgl87\" (UniqueName: \"kubernetes.io/projected/e705ec5e-fd22-4050-be73-f79ec4c45320-kube-api-access-zgl87\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.451467 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bfcc5b469-tzkw9_e705ec5e-fd22-4050-be73-f79ec4c45320/console/0.log" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.451811 4853 generic.go:334] "Generic (PLEG): container finished" podID="e705ec5e-fd22-4050-be73-f79ec4c45320" containerID="59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404" exitCode=2 Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.451860 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bfcc5b469-tzkw9" event={"ID":"e705ec5e-fd22-4050-be73-f79ec4c45320","Type":"ContainerDied","Data":"59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404"} Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.451928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bfcc5b469-tzkw9" event={"ID":"e705ec5e-fd22-4050-be73-f79ec4c45320","Type":"ContainerDied","Data":"d0b82a5e731565551076495376755e1bd17055d6c2de4d02104003becef22c4c"} Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.451952 4853 scope.go:117] "RemoveContainer" containerID="59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.451876 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bfcc5b469-tzkw9" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.474832 4853 scope.go:117] "RemoveContainer" containerID="59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404" Dec 09 17:14:26 crc kubenswrapper[4853]: E1209 17:14:26.476953 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404\": container with ID starting with 59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404 not found: ID does not exist" containerID="59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.477002 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404"} err="failed to get container status \"59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404\": rpc error: code = NotFound desc = could not find container \"59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404\": container with ID starting with 59a8b6ef02131bc21f1b7429a099d7df7bf912eb382ca5453a513fc57819b404 not found: ID does not exist" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.484054 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bfcc5b469-tzkw9"] Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.489914 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6bfcc5b469-tzkw9"] Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.746486 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s"] Dec 09 17:14:26 crc kubenswrapper[4853]: E1209 17:14:26.746875 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e705ec5e-fd22-4050-be73-f79ec4c45320" containerName="console" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.746899 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e705ec5e-fd22-4050-be73-f79ec4c45320" containerName="console" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.747092 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e705ec5e-fd22-4050-be73-f79ec4c45320" containerName="console" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.748348 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.751540 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.768536 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s"] Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.786585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.786723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8j2p\" (UniqueName: \"kubernetes.io/projected/3d92b1b6-cc38-4ead-893a-69afbb6e6786-kube-api-access-c8j2p\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.786763 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.888449 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8j2p\" (UniqueName: \"kubernetes.io/projected/3d92b1b6-cc38-4ead-893a-69afbb6e6786-kube-api-access-c8j2p\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.888527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.888639 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.889440 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.889967 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:26 crc kubenswrapper[4853]: I1209 17:14:26.916465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8j2p\" (UniqueName: \"kubernetes.io/projected/3d92b1b6-cc38-4ead-893a-69afbb6e6786-kube-api-access-c8j2p\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:27 crc kubenswrapper[4853]: I1209 17:14:27.067964 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:27 crc kubenswrapper[4853]: I1209 17:14:27.326192 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s"] Dec 09 17:14:27 crc kubenswrapper[4853]: I1209 17:14:27.470706 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" event={"ID":"3d92b1b6-cc38-4ead-893a-69afbb6e6786","Type":"ContainerStarted","Data":"592aabdb8bd408c6817125e0bab016a6b0bab132ed5b1cabaf1ebd39dc9a7985"} Dec 09 17:14:27 crc kubenswrapper[4853]: I1209 17:14:27.577496 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e705ec5e-fd22-4050-be73-f79ec4c45320" path="/var/lib/kubelet/pods/e705ec5e-fd22-4050-be73-f79ec4c45320/volumes" Dec 09 17:14:28 crc kubenswrapper[4853]: I1209 17:14:28.479417 4853 generic.go:334] "Generic (PLEG): container finished" podID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerID="6ed4090fe3b01af314fd02f6694655f4f72eb3025bd5b6ef8cdf6e89e1e86298" exitCode=0 Dec 09 17:14:28 crc kubenswrapper[4853]: I1209 17:14:28.479475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" event={"ID":"3d92b1b6-cc38-4ead-893a-69afbb6e6786","Type":"ContainerDied","Data":"6ed4090fe3b01af314fd02f6694655f4f72eb3025bd5b6ef8cdf6e89e1e86298"} Dec 09 17:14:28 crc kubenswrapper[4853]: I1209 17:14:28.482657 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:14:30 crc kubenswrapper[4853]: I1209 17:14:30.497017 4853 generic.go:334] "Generic (PLEG): container finished" podID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerID="f97a267f1b8ae79b51c3db5cc599cfa54cce3a4450363dcf5ecb5098a005add4" exitCode=0 Dec 09 17:14:30 crc kubenswrapper[4853]: I1209 17:14:30.497058 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" 
event={"ID":"3d92b1b6-cc38-4ead-893a-69afbb6e6786","Type":"ContainerDied","Data":"f97a267f1b8ae79b51c3db5cc599cfa54cce3a4450363dcf5ecb5098a005add4"} Dec 09 17:14:31 crc kubenswrapper[4853]: I1209 17:14:31.524685 4853 generic.go:334] "Generic (PLEG): container finished" podID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerID="9d75e763332d6fa499082971305d8d79f4f2356d5f1c7d2fb3b97eae7fe89eeb" exitCode=0 Dec 09 17:14:31 crc kubenswrapper[4853]: I1209 17:14:31.525048 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" event={"ID":"3d92b1b6-cc38-4ead-893a-69afbb6e6786","Type":"ContainerDied","Data":"9d75e763332d6fa499082971305d8d79f4f2356d5f1c7d2fb3b97eae7fe89eeb"} Dec 09 17:14:32 crc kubenswrapper[4853]: I1209 17:14:32.861834 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:32 crc kubenswrapper[4853]: I1209 17:14:32.981456 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-bundle\") pod \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " Dec 09 17:14:32 crc kubenswrapper[4853]: I1209 17:14:32.981834 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-util\") pod \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " Dec 09 17:14:32 crc kubenswrapper[4853]: I1209 17:14:32.981864 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8j2p\" (UniqueName: \"kubernetes.io/projected/3d92b1b6-cc38-4ead-893a-69afbb6e6786-kube-api-access-c8j2p\") pod \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\" (UID: \"3d92b1b6-cc38-4ead-893a-69afbb6e6786\") " Dec 09 17:14:32 crc kubenswrapper[4853]: I1209 17:14:32.994582 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-util" (OuterVolumeSpecName: "util") pod "3d92b1b6-cc38-4ead-893a-69afbb6e6786" (UID: "3d92b1b6-cc38-4ead-893a-69afbb6e6786"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:14:32 crc kubenswrapper[4853]: I1209 17:14:32.995149 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-bundle" (OuterVolumeSpecName: "bundle") pod "3d92b1b6-cc38-4ead-893a-69afbb6e6786" (UID: "3d92b1b6-cc38-4ead-893a-69afbb6e6786"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:14:32 crc kubenswrapper[4853]: I1209 17:14:32.998548 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d92b1b6-cc38-4ead-893a-69afbb6e6786-kube-api-access-c8j2p" (OuterVolumeSpecName: "kube-api-access-c8j2p") pod "3d92b1b6-cc38-4ead-893a-69afbb6e6786" (UID: "3d92b1b6-cc38-4ead-893a-69afbb6e6786"). InnerVolumeSpecName "kube-api-access-c8j2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:14:33 crc kubenswrapper[4853]: I1209 17:14:33.083749 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:33 crc kubenswrapper[4853]: I1209 17:14:33.083777 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d92b1b6-cc38-4ead-893a-69afbb6e6786-util\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:33 crc kubenswrapper[4853]: I1209 17:14:33.083787 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8j2p\" (UniqueName: \"kubernetes.io/projected/3d92b1b6-cc38-4ead-893a-69afbb6e6786-kube-api-access-c8j2p\") on node \"crc\" DevicePath \"\"" Dec 09 17:14:33 crc kubenswrapper[4853]: I1209 17:14:33.543718 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" event={"ID":"3d92b1b6-cc38-4ead-893a-69afbb6e6786","Type":"ContainerDied","Data":"592aabdb8bd408c6817125e0bab016a6b0bab132ed5b1cabaf1ebd39dc9a7985"} Dec 09 17:14:33 crc kubenswrapper[4853]: I1209 17:14:33.543769 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="592aabdb8bd408c6817125e0bab016a6b0bab132ed5b1cabaf1ebd39dc9a7985" Dec 09 17:14:33 crc kubenswrapper[4853]: I1209 17:14:33.543827 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.644040 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl"] Dec 09 17:14:45 crc kubenswrapper[4853]: E1209 17:14:45.644756 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerName="pull" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.644771 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerName="pull" Dec 09 17:14:45 crc kubenswrapper[4853]: E1209 17:14:45.644780 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerName="util" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.644787 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerName="util" Dec 09 17:14:45 crc kubenswrapper[4853]: E1209 17:14:45.644794 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerName="extract" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.644799 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerName="extract" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.644937 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d92b1b6-cc38-4ead-893a-69afbb6e6786" containerName="extract" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.645403 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.647080 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.647465 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-b9mqw" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.647614 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.648029 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.648219 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.674201 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl"] Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.784800 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drtxm\" (UniqueName: \"kubernetes.io/projected/992aeeb4-58d4-4276-b421-6bb66e4c419d-kube-api-access-drtxm\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.784870 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/992aeeb4-58d4-4276-b421-6bb66e4c419d-webhook-cert\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.785035 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/992aeeb4-58d4-4276-b421-6bb66e4c419d-apiservice-cert\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.886314 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drtxm\" (UniqueName: \"kubernetes.io/projected/992aeeb4-58d4-4276-b421-6bb66e4c419d-kube-api-access-drtxm\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.886392 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/992aeeb4-58d4-4276-b421-6bb66e4c419d-webhook-cert\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.886451 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/992aeeb4-58d4-4276-b421-6bb66e4c419d-apiservice-cert\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.895007 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/992aeeb4-58d4-4276-b421-6bb66e4c419d-webhook-cert\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.895048 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/992aeeb4-58d4-4276-b421-6bb66e4c419d-apiservice-cert\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.896970 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq"] Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.897901 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.907204 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4bgd7" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.907211 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.912937 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.916837 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq"] Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.917437 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drtxm\" (UniqueName: \"kubernetes.io/projected/992aeeb4-58d4-4276-b421-6bb66e4c419d-kube-api-access-drtxm\") pod \"metallb-operator-controller-manager-76f597947c-hpbsl\" (UID: \"992aeeb4-58d4-4276-b421-6bb66e4c419d\") " pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:45 crc kubenswrapper[4853]: I1209 17:14:45.964176 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.091402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-apiservice-cert\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.091445 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-webhook-cert\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.091510 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvg2w\" (UniqueName: \"kubernetes.io/projected/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-kube-api-access-rvg2w\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.193672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-apiservice-cert\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.194093 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-webhook-cert\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.194228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvg2w\" (UniqueName: \"kubernetes.io/projected/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-kube-api-access-rvg2w\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.211717 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-webhook-cert\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.212202 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-apiservice-cert\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " 
pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.243215 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvg2w\" (UniqueName: \"kubernetes.io/projected/2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6-kube-api-access-rvg2w\") pod \"metallb-operator-webhook-server-75b46f54d-7z4tq\" (UID: \"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6\") " pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.256987 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.407066 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl"] Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.668073 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" event={"ID":"992aeeb4-58d4-4276-b421-6bb66e4c419d","Type":"ContainerStarted","Data":"aa541587fb894760c6c514a40fc1f2c68331a134542b47668ac57b7e728f4e77"} Dec 09 17:14:46 crc kubenswrapper[4853]: I1209 17:14:46.761979 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq"] Dec 09 17:14:46 crc kubenswrapper[4853]: W1209 17:14:46.767883 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2974b16e_c9c7_41c5_a8c0_35c5f44cf2f6.slice/crio-7ba35da1731ccf4b63947fe3888c269fdc799a6817b158b315d65632ee829917 WatchSource:0}: Error finding container 7ba35da1731ccf4b63947fe3888c269fdc799a6817b158b315d65632ee829917: Status 404 returned error can't find the container with id 7ba35da1731ccf4b63947fe3888c269fdc799a6817b158b315d65632ee829917 Dec 09 17:14:47 crc kubenswrapper[4853]: I1209 17:14:47.676931 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" event={"ID":"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6","Type":"ContainerStarted","Data":"7ba35da1731ccf4b63947fe3888c269fdc799a6817b158b315d65632ee829917"} Dec 09 17:14:50 crc kubenswrapper[4853]: I1209 17:14:50.706080 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" event={"ID":"992aeeb4-58d4-4276-b421-6bb66e4c419d","Type":"ContainerStarted","Data":"12b91947ac3b026dbc86a57400fed29ee9e41728a14b2931a0c586b7c467d8d8"} Dec 09 17:14:50 crc kubenswrapper[4853]: I1209 17:14:50.706716 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:14:50 crc kubenswrapper[4853]: I1209 17:14:50.739335 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" podStartSLOduration=1.700817277 podStartE2EDuration="5.739318621s" podCreationTimestamp="2025-12-09 17:14:45 +0000 UTC" firstStartedPulling="2025-12-09 17:14:46.444962803 +0000 UTC m=+1113.379701985" lastFinishedPulling="2025-12-09 17:14:50.483464147 +0000 UTC m=+1117.418203329" observedRunningTime="2025-12-09 17:14:50.733881612 +0000 UTC m=+1117.668620794" watchObservedRunningTime="2025-12-09 17:14:50.739318621 +0000 UTC m=+1117.674057803" Dec 09 17:14:52 crc 
kubenswrapper[4853]: I1209 17:14:52.727897 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" event={"ID":"2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6","Type":"ContainerStarted","Data":"6914cadb061ca9e82f1f92ba1c7b69b272f7e0cbf6360d261854acbc0fff75e8"} Dec 09 17:14:52 crc kubenswrapper[4853]: I1209 17:14:52.728438 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:14:52 crc kubenswrapper[4853]: I1209 17:14:52.748613 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" podStartSLOduration=2.375671987 podStartE2EDuration="7.748576832s" podCreationTimestamp="2025-12-09 17:14:45 +0000 UTC" firstStartedPulling="2025-12-09 17:14:46.771206661 +0000 UTC m=+1113.705945863" lastFinishedPulling="2025-12-09 17:14:52.144111526 +0000 UTC m=+1119.078850708" observedRunningTime="2025-12-09 17:14:52.743143393 +0000 UTC m=+1119.677882585" watchObservedRunningTime="2025-12-09 17:14:52.748576832 +0000 UTC m=+1119.683316024" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.155748 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp"] Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.157496 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.162674 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.163131 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.168794 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp"] Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.245747 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a114a56-f0bf-424e-b627-93415929b182-secret-volume\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.245846 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5j7g\" (UniqueName: \"kubernetes.io/projected/9a114a56-f0bf-424e-b627-93415929b182-kube-api-access-f5j7g\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.245893 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a114a56-f0bf-424e-b627-93415929b182-config-volume\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 
17:15:00.347071 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a114a56-f0bf-424e-b627-93415929b182-secret-volume\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.347122 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5j7g\" (UniqueName: \"kubernetes.io/projected/9a114a56-f0bf-424e-b627-93415929b182-kube-api-access-f5j7g\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.347141 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a114a56-f0bf-424e-b627-93415929b182-config-volume\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.348298 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a114a56-f0bf-424e-b627-93415929b182-config-volume\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.352248 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a114a56-f0bf-424e-b627-93415929b182-secret-volume\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.387567 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5j7g\" (UniqueName: \"kubernetes.io/projected/9a114a56-f0bf-424e-b627-93415929b182-kube-api-access-f5j7g\") pod \"collect-profiles-29421675-9mfwp\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:00 crc kubenswrapper[4853]: I1209 17:15:00.535632 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:01 crc kubenswrapper[4853]: I1209 17:15:01.083120 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp"] Dec 09 17:15:01 crc kubenswrapper[4853]: I1209 17:15:01.812227 4853 generic.go:334] "Generic (PLEG): container finished" podID="9a114a56-f0bf-424e-b627-93415929b182" containerID="e76db3da77e451294e9b89cf2ccc67a2c556f9bae0dda33b4252cf6a40b042ec" exitCode=0 Dec 09 17:15:01 crc kubenswrapper[4853]: I1209 17:15:01.812297 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" event={"ID":"9a114a56-f0bf-424e-b627-93415929b182","Type":"ContainerDied","Data":"e76db3da77e451294e9b89cf2ccc67a2c556f9bae0dda33b4252cf6a40b042ec"} Dec 09 17:15:01 crc kubenswrapper[4853]: I1209 17:15:01.812486 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" event={"ID":"9a114a56-f0bf-424e-b627-93415929b182","Type":"ContainerStarted","Data":"b7d34f0b95eb644cb6182b5da8b1ab5fe20c15f76ea520fb3aed45abb88bd318"} Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.328282 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.412376 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a114a56-f0bf-424e-b627-93415929b182-config-volume\") pod \"9a114a56-f0bf-424e-b627-93415929b182\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.412460 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5j7g\" (UniqueName: \"kubernetes.io/projected/9a114a56-f0bf-424e-b627-93415929b182-kube-api-access-f5j7g\") pod \"9a114a56-f0bf-424e-b627-93415929b182\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.412628 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a114a56-f0bf-424e-b627-93415929b182-secret-volume\") pod \"9a114a56-f0bf-424e-b627-93415929b182\" (UID: \"9a114a56-f0bf-424e-b627-93415929b182\") " Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.413577 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a114a56-f0bf-424e-b627-93415929b182-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a114a56-f0bf-424e-b627-93415929b182" (UID: "9a114a56-f0bf-424e-b627-93415929b182"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.420295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a114a56-f0bf-424e-b627-93415929b182-kube-api-access-f5j7g" (OuterVolumeSpecName: "kube-api-access-f5j7g") pod "9a114a56-f0bf-424e-b627-93415929b182" (UID: "9a114a56-f0bf-424e-b627-93415929b182"). InnerVolumeSpecName "kube-api-access-f5j7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.423912 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a114a56-f0bf-424e-b627-93415929b182-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a114a56-f0bf-424e-b627-93415929b182" (UID: "9a114a56-f0bf-424e-b627-93415929b182"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.514900 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5j7g\" (UniqueName: \"kubernetes.io/projected/9a114a56-f0bf-424e-b627-93415929b182-kube-api-access-f5j7g\") on node \"crc\" DevicePath \"\"" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.514950 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a114a56-f0bf-424e-b627-93415929b182-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.514964 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a114a56-f0bf-424e-b627-93415929b182-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.840200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" event={"ID":"9a114a56-f0bf-424e-b627-93415929b182","Type":"ContainerDied","Data":"b7d34f0b95eb644cb6182b5da8b1ab5fe20c15f76ea520fb3aed45abb88bd318"} Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.840242 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d34f0b95eb644cb6182b5da8b1ab5fe20c15f76ea520fb3aed45abb88bd318" Dec 09 17:15:03 crc kubenswrapper[4853]: I1209 17:15:03.840302 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp" Dec 09 17:15:06 crc kubenswrapper[4853]: I1209 17:15:06.260940 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75b46f54d-7z4tq" Dec 09 17:15:25 crc kubenswrapper[4853]: I1209 17:15:25.967685 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76f597947c-hpbsl" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.769957 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd"] Dec 09 17:15:26 crc kubenswrapper[4853]: E1209 17:15:26.770555 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a114a56-f0bf-424e-b627-93415929b182" containerName="collect-profiles" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.770571 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a114a56-f0bf-424e-b627-93415929b182" containerName="collect-profiles" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.770733 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a114a56-f0bf-424e-b627-93415929b182" containerName="collect-profiles" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.771253 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.773144 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zcq9k" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.773715 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.780411 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pn5xm"] Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.784124 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.785873 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.786822 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.826484 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd"] Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.886460 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9f624"] Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.887723 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9f624" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.891973 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hkdc7" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.892191 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.892430 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.892580 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.899212 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-vrl25"] Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.900879 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.904439 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.910392 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-vrl25"] Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924339 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-conf\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-569vl\" (UniqueName: \"kubernetes.io/projected/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-kube-api-access-569vl\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924419 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-metrics\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924467 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-metrics-certs\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924494 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-startup\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924530 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7963b96f-5ec5-4e4d-a505-286baf8fec0a-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9b5hd\" (UID: \"7963b96f-5ec5-4e4d-a505-286baf8fec0a\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-sockets\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:26 crc kubenswrapper[4853]: I1209 17:15:26.924575 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spvfd\" (UniqueName: \"kubernetes.io/projected/7963b96f-5ec5-4e4d-a505-286baf8fec0a-kube-api-access-spvfd\") pod \"frr-k8s-webhook-server-7fcb986d4-9b5hd\" (UID: \"7963b96f-5ec5-4e4d-a505-286baf8fec0a\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:26 
crc kubenswrapper[4853]: I1209 17:15:26.924615 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-reloader\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026170 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-sockets\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spvfd\" (UniqueName: \"kubernetes.io/projected/7963b96f-5ec5-4e4d-a505-286baf8fec0a-kube-api-access-spvfd\") pod \"frr-k8s-webhook-server-7fcb986d4-9b5hd\" (UID: \"7963b96f-5ec5-4e4d-a505-286baf8fec0a\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026559 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-reloader\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026630 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbt8m\" (UniqueName: \"kubernetes.io/projected/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-kube-api-access-gbt8m\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-conf\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-sockets\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026704 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-569vl\" (UniqueName: \"kubernetes.io/projected/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-kube-api-access-569vl\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026757 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-metrics\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026793 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-metrics-certs\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-metallb-excludel2\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-cert\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026889 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-metrics-certs\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026924 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-startup\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-metrics-certs\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.026991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.027018 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7963b96f-5ec5-4e4d-a505-286baf8fec0a-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9b5hd\" (UID: \"7963b96f-5ec5-4e4d-a505-286baf8fec0a\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.027041 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzls9\" (UniqueName: \"kubernetes.io/projected/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-kube-api-access-jzls9\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.027131 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-reloader\") pod \"frr-k8s-pn5xm\" (UID: 
\"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.027425 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-metrics\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.027550 4853 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.027617 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7963b96f-5ec5-4e4d-a505-286baf8fec0a-cert podName:7963b96f-5ec5-4e4d-a505-286baf8fec0a nodeName:}" failed. No retries permitted until 2025-12-09 17:15:27.527580073 +0000 UTC m=+1154.462319255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7963b96f-5ec5-4e4d-a505-286baf8fec0a-cert") pod "frr-k8s-webhook-server-7fcb986d4-9b5hd" (UID: "7963b96f-5ec5-4e4d-a505-286baf8fec0a") : secret "frr-k8s-webhook-server-cert" not found Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.027664 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-conf\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.028357 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-frr-startup\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.038040 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-metrics-certs\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.043087 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spvfd\" (UniqueName: \"kubernetes.io/projected/7963b96f-5ec5-4e4d-a505-286baf8fec0a-kube-api-access-spvfd\") pod \"frr-k8s-webhook-server-7fcb986d4-9b5hd\" (UID: \"7963b96f-5ec5-4e4d-a505-286baf8fec0a\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.046507 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-569vl\" (UniqueName: \"kubernetes.io/projected/4cfdbc7f-7f33-4e37-97ed-d568fe27219c-kube-api-access-569vl\") pod \"frr-k8s-pn5xm\" (UID: \"4cfdbc7f-7f33-4e37-97ed-d568fe27219c\") " pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.106246 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.128040 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbt8m\" (UniqueName: \"kubernetes.io/projected/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-kube-api-access-gbt8m\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.128126 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-metrics-certs\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.128157 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-cert\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.128176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-metallb-excludel2\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.128264 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-metrics-certs\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.128295 4853 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.128369 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-metrics-certs podName:3830e853-f59e-47bc-8fce-ef3f9a6e3d24 nodeName:}" failed. No retries permitted until 2025-12-09 17:15:27.628350388 +0000 UTC m=+1154.563089570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-metrics-certs") pod "controller-f8648f98b-vrl25" (UID: "3830e853-f59e-47bc-8fce-ef3f9a6e3d24") : secret "controller-certs-secret" not found Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.128307 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.128405 4853 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.128459 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist podName:d69e1c5d-c55e-4df0-9a28-1b5f9de9136c nodeName:}" failed. 
No retries permitted until 2025-12-09 17:15:27.628441721 +0000 UTC m=+1154.563180903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist") pod "speaker-9f624" (UID: "d69e1c5d-c55e-4df0-9a28-1b5f9de9136c") : secret "metallb-memberlist" not found Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.128492 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzls9\" (UniqueName: \"kubernetes.io/projected/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-kube-api-access-jzls9\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.129132 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-metallb-excludel2\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.131586 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-metrics-certs\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.131833 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.144042 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-cert\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.144541 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbt8m\" (UniqueName: \"kubernetes.io/projected/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-kube-api-access-gbt8m\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.145466 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzls9\" (UniqueName: \"kubernetes.io/projected/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-kube-api-access-jzls9\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.534542 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7963b96f-5ec5-4e4d-a505-286baf8fec0a-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9b5hd\" (UID: \"7963b96f-5ec5-4e4d-a505-286baf8fec0a\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.545536 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7963b96f-5ec5-4e4d-a505-286baf8fec0a-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9b5hd\" (UID: \"7963b96f-5ec5-4e4d-a505-286baf8fec0a\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 
17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.636311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.636546 4853 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.636675 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-metrics-certs\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: E1209 17:15:27.636696 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist podName:d69e1c5d-c55e-4df0-9a28-1b5f9de9136c nodeName:}" failed. No retries permitted until 2025-12-09 17:15:28.636665418 +0000 UTC m=+1155.571404650 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist") pod "speaker-9f624" (UID: "d69e1c5d-c55e-4df0-9a28-1b5f9de9136c") : secret "metallb-memberlist" not found Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.644382 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3830e853-f59e-47bc-8fce-ef3f9a6e3d24-metrics-certs\") pod \"controller-f8648f98b-vrl25\" (UID: \"3830e853-f59e-47bc-8fce-ef3f9a6e3d24\") " pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.694545 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:27 crc kubenswrapper[4853]: I1209 17:15:27.830895 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:28 crc kubenswrapper[4853]: I1209 17:15:28.060925 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"88275b2634202ecc34bfb43f05fd6ca7e28ffa23a2ab30cf975f0c0c9126444f"} Dec 09 17:15:28 crc kubenswrapper[4853]: W1209 17:15:28.268501 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7963b96f_5ec5_4e4d_a505_286baf8fec0a.slice/crio-00015ac8c369ecb95d5def45a6811d1f507972008b575ee21bd0f458d206c5cf WatchSource:0}: Error finding container 00015ac8c369ecb95d5def45a6811d1f507972008b575ee21bd0f458d206c5cf: Status 404 returned error can't find the container with id 00015ac8c369ecb95d5def45a6811d1f507972008b575ee21bd0f458d206c5cf Dec 09 17:15:28 crc kubenswrapper[4853]: I1209 17:15:28.274844 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd"] Dec 09 17:15:28 crc kubenswrapper[4853]: I1209 17:15:28.408824 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-vrl25"] Dec 09 17:15:28 crc kubenswrapper[4853]: I1209 17:15:28.669922 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:28 crc kubenswrapper[4853]: E1209 17:15:28.670173 4853 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 17:15:28 crc kubenswrapper[4853]: E1209 17:15:28.670232 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist podName:d69e1c5d-c55e-4df0-9a28-1b5f9de9136c nodeName:}" failed. No retries permitted until 2025-12-09 17:15:30.670214191 +0000 UTC m=+1157.604953383 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist") pod "speaker-9f624" (UID: "d69e1c5d-c55e-4df0-9a28-1b5f9de9136c") : secret "metallb-memberlist" not found Dec 09 17:15:29 crc kubenswrapper[4853]: I1209 17:15:29.068695 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-vrl25" event={"ID":"3830e853-f59e-47bc-8fce-ef3f9a6e3d24","Type":"ContainerStarted","Data":"b9a9c8c27e7401a4ed2d5d8fe065f06d9048960ddf2927205552d96fc3996fa8"} Dec 09 17:15:29 crc kubenswrapper[4853]: I1209 17:15:29.069069 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-vrl25" event={"ID":"3830e853-f59e-47bc-8fce-ef3f9a6e3d24","Type":"ContainerStarted","Data":"ae665681feebb261a222acc435d1b89d8ec079e0b3a6e5d28481171f91594c5b"} Dec 09 17:15:29 crc kubenswrapper[4853]: I1209 17:15:29.069082 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:29 crc kubenswrapper[4853]: I1209 17:15:29.069091 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-vrl25" event={"ID":"3830e853-f59e-47bc-8fce-ef3f9a6e3d24","Type":"ContainerStarted","Data":"6b6f32d4125488b2a257be0b9f9e2809dea73d84d0d1ca2a7ce519f22093259a"} Dec 09 17:15:29 crc kubenswrapper[4853]: I1209 17:15:29.070868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" event={"ID":"7963b96f-5ec5-4e4d-a505-286baf8fec0a","Type":"ContainerStarted","Data":"00015ac8c369ecb95d5def45a6811d1f507972008b575ee21bd0f458d206c5cf"} Dec 09 17:15:29 crc kubenswrapper[4853]: I1209 17:15:29.087826 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-vrl25" podStartSLOduration=3.087803123 podStartE2EDuration="3.087803123s" podCreationTimestamp="2025-12-09 17:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:15:29.082901857 +0000 UTC m=+1156.017641039" watchObservedRunningTime="2025-12-09 17:15:29.087803123 +0000 UTC m=+1156.022542305" Dec 09 17:15:30 crc kubenswrapper[4853]: I1209 17:15:30.701506 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:30 crc kubenswrapper[4853]: I1209 17:15:30.723418 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d69e1c5d-c55e-4df0-9a28-1b5f9de9136c-memberlist\") pod \"speaker-9f624\" (UID: \"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c\") " pod="metallb-system/speaker-9f624" Dec 09 17:15:30 crc kubenswrapper[4853]: I1209 17:15:30.817557 4853 util.go:30] "No sandbox for pod can be found. 
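The speaker pod stays blocked on its memberlist volume until the metallb-memberlist secret finally exists (the mount succeeds at 17:15:30.723418). One way to check from outside whether the secret the kubelet is waiting on has been created yet, assuming the official kubernetes Python client and a kubeconfig pointing at this cluster:

    # Editor's sketch: does the secret the failing mount references exist yet?
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        v1.read_namespaced_secret("metallb-memberlist", "metallb-system")
        print("secret exists; the mount should succeed on the next retry")
    except ApiException as e:
        if e.status == 404:
            print("secret not created yet; the kubelet keeps backing off")
        else:
            raise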
Need to start a new one" pod="metallb-system/speaker-9f624" Dec 09 17:15:31 crc kubenswrapper[4853]: I1209 17:15:31.112656 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9f624" event={"ID":"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c","Type":"ContainerStarted","Data":"c720815a48005bf89274ddb9dae7f9f091010362a1626169230e6b9368413b2b"} Dec 09 17:15:32 crc kubenswrapper[4853]: I1209 17:15:32.121329 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9f624" event={"ID":"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c","Type":"ContainerStarted","Data":"a25cc6b76380e0c14133c5ba1559f0f129ea0c49982f49d250d729ceb8e41dbe"} Dec 09 17:15:32 crc kubenswrapper[4853]: I1209 17:15:32.121637 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9f624" event={"ID":"d69e1c5d-c55e-4df0-9a28-1b5f9de9136c","Type":"ContainerStarted","Data":"46b4a878f66791b96a59613b0ea8c28a9ac2bb090d8dc137162e11cdb8d036f0"} Dec 09 17:15:32 crc kubenswrapper[4853]: I1209 17:15:32.122800 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9f624" Dec 09 17:15:32 crc kubenswrapper[4853]: I1209 17:15:32.141935 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9f624" podStartSLOduration=6.141914876 podStartE2EDuration="6.141914876s" podCreationTimestamp="2025-12-09 17:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:15:32.140764756 +0000 UTC m=+1159.075503938" watchObservedRunningTime="2025-12-09 17:15:32.141914876 +0000 UTC m=+1159.076654058" Dec 09 17:15:36 crc kubenswrapper[4853]: I1209 17:15:36.160431 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" event={"ID":"7963b96f-5ec5-4e4d-a505-286baf8fec0a","Type":"ContainerStarted","Data":"7f6085fb4a9583a352ded7ff9e04906d0d85070eef9bb8b9fed5a51dd4efc213"} Dec 09 17:15:36 crc kubenswrapper[4853]: I1209 17:15:36.161051 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:36 crc kubenswrapper[4853]: I1209 17:15:36.162869 4853 generic.go:334] "Generic (PLEG): container finished" podID="4cfdbc7f-7f33-4e37-97ed-d568fe27219c" containerID="2292ac36b00bf4fc7deb7a4c061e5c38d64c03c7a1da42c868629ec2e8638bdd" exitCode=0 Dec 09 17:15:36 crc kubenswrapper[4853]: I1209 17:15:36.162911 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerDied","Data":"2292ac36b00bf4fc7deb7a4c061e5c38d64c03c7a1da42c868629ec2e8638bdd"} Dec 09 17:15:36 crc kubenswrapper[4853]: I1209 17:15:36.183055 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" podStartSLOduration=2.708703058 podStartE2EDuration="10.183032849s" podCreationTimestamp="2025-12-09 17:15:26 +0000 UTC" firstStartedPulling="2025-12-09 17:15:28.270526968 +0000 UTC m=+1155.205266150" lastFinishedPulling="2025-12-09 17:15:35.744856749 +0000 UTC m=+1162.679595941" observedRunningTime="2025-12-09 17:15:36.17601942 +0000 UTC m=+1163.110758602" watchObservedRunningTime="2025-12-09 17:15:36.183032849 +0000 UTC m=+1163.117772031" Dec 09 17:15:37 crc kubenswrapper[4853]: I1209 17:15:37.171548 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="4cfdbc7f-7f33-4e37-97ed-d568fe27219c" containerID="cb2bb369969727c80c0ddbededf9b6beeca29b2ec32934526ba69e0e59fa3a86" exitCode=0 Dec 09 17:15:37 crc kubenswrapper[4853]: I1209 17:15:37.171653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerDied","Data":"cb2bb369969727c80c0ddbededf9b6beeca29b2ec32934526ba69e0e59fa3a86"} Dec 09 17:15:39 crc kubenswrapper[4853]: I1209 17:15:39.188932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"691be6aedb7e912bb754a488fe42350339585cb3c77ae99190aeb7ea0e8e5afa"} Dec 09 17:15:40 crc kubenswrapper[4853]: I1209 17:15:40.200406 4853 generic.go:334] "Generic (PLEG): container finished" podID="4cfdbc7f-7f33-4e37-97ed-d568fe27219c" containerID="691be6aedb7e912bb754a488fe42350339585cb3c77ae99190aeb7ea0e8e5afa" exitCode=0 Dec 09 17:15:40 crc kubenswrapper[4853]: I1209 17:15:40.200467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerDied","Data":"691be6aedb7e912bb754a488fe42350339585cb3c77ae99190aeb7ea0e8e5afa"} Dec 09 17:15:41 crc kubenswrapper[4853]: I1209 17:15:41.213131 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"059ae0276bdf0f96b8ea130905e4c072750628b0eb72c4db81a8069afaed7034"} Dec 09 17:15:41 crc kubenswrapper[4853]: I1209 17:15:41.213663 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"c022c53a0ec17c9baaf9a59a8732c522c6bb3c9dbc45248b75e8c7084fbdcc7a"} Dec 09 17:15:41 crc kubenswrapper[4853]: I1209 17:15:41.213675 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"4b225e2df8f96a1beb03a25a39f6b29847fc06d69b172020cd5bf62caaa8859e"} Dec 09 17:15:41 crc kubenswrapper[4853]: I1209 17:15:41.213683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"9e0dab28bd376038f8b1dc3fbf48b39e487954264f6984dcfe17eef7efb72063"} Dec 09 17:15:41 crc kubenswrapper[4853]: I1209 17:15:41.213691 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"889a024ecd6c9ae29308eb1c9e71ea81e71e91872b81dab418b3f52382a191b9"} Dec 09 17:15:42 crc kubenswrapper[4853]: I1209 17:15:42.228544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pn5xm" event={"ID":"4cfdbc7f-7f33-4e37-97ed-d568fe27219c","Type":"ContainerStarted","Data":"f5c3c763894d0439e2ed839ed326d5ca123358d2d093e6d7f4614c440164c3f0"} Dec 09 17:15:42 crc kubenswrapper[4853]: I1209 17:15:42.228742 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:42 crc kubenswrapper[4853]: I1209 17:15:42.254313 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pn5xm" podStartSLOduration=7.782684236 podStartE2EDuration="16.25429253s" 
podCreationTimestamp="2025-12-09 17:15:26 +0000 UTC" firstStartedPulling="2025-12-09 17:15:27.280110801 +0000 UTC m=+1154.214849983" lastFinishedPulling="2025-12-09 17:15:35.751719095 +0000 UTC m=+1162.686458277" observedRunningTime="2025-12-09 17:15:42.248942182 +0000 UTC m=+1169.183681364" watchObservedRunningTime="2025-12-09 17:15:42.25429253 +0000 UTC m=+1169.189031712" Dec 09 17:15:47 crc kubenswrapper[4853]: I1209 17:15:47.107022 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:47 crc kubenswrapper[4853]: I1209 17:15:47.170285 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:47 crc kubenswrapper[4853]: I1209 17:15:47.701449 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9b5hd" Dec 09 17:15:47 crc kubenswrapper[4853]: I1209 17:15:47.834845 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-vrl25" Dec 09 17:15:50 crc kubenswrapper[4853]: I1209 17:15:50.821715 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9f624" Dec 09 17:15:54 crc kubenswrapper[4853]: I1209 17:15:54.866431 4853 patch_prober.go:28] interesting pod/dns-default-5grrn container/dns namespace/openshift-dns: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=kubernetes Dec 09 17:15:54 crc kubenswrapper[4853]: I1209 17:15:54.868110 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-5grrn" podUID="7585c230-8db6-45bd-bd39-17d27ff826dd" containerName="dns" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.109753 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pn5xm" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.255188 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pvfz9"] Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.258310 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.265785 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.266235 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-97hws" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.266584 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.287699 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pvfz9"] Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.390527 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlprk\" (UniqueName: \"kubernetes.io/projected/39bb0b5d-d4ed-40d2-9c00-226b17a9ef09-kube-api-access-rlprk\") pod \"openstack-operator-index-pvfz9\" (UID: \"39bb0b5d-d4ed-40d2-9c00-226b17a9ef09\") " pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.491459 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlprk\" (UniqueName: \"kubernetes.io/projected/39bb0b5d-d4ed-40d2-9c00-226b17a9ef09-kube-api-access-rlprk\") pod \"openstack-operator-index-pvfz9\" (UID: \"39bb0b5d-d4ed-40d2-9c00-226b17a9ef09\") " pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.513374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlprk\" (UniqueName: \"kubernetes.io/projected/39bb0b5d-d4ed-40d2-9c00-226b17a9ef09-kube-api-access-rlprk\") pod \"openstack-operator-index-pvfz9\" (UID: \"39bb0b5d-d4ed-40d2-9c00-226b17a9ef09\") " pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:15:57 crc kubenswrapper[4853]: I1209 17:15:57.592732 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:15:58 crc kubenswrapper[4853]: I1209 17:15:58.088225 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pvfz9"] Dec 09 17:15:58 crc kubenswrapper[4853]: W1209 17:15:58.091968 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39bb0b5d_d4ed_40d2_9c00_226b17a9ef09.slice/crio-73dae5fe7a2043385deab86912a131a2b3a5ac25d04aa5d7c6cb631e90deebbe WatchSource:0}: Error finding container 73dae5fe7a2043385deab86912a131a2b3a5ac25d04aa5d7c6cb631e90deebbe: Status 404 returned error can't find the container with id 73dae5fe7a2043385deab86912a131a2b3a5ac25d04aa5d7c6cb631e90deebbe Dec 09 17:15:58 crc kubenswrapper[4853]: I1209 17:15:58.402613 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pvfz9" event={"ID":"39bb0b5d-d4ed-40d2-9c00-226b17a9ef09","Type":"ContainerStarted","Data":"73dae5fe7a2043385deab86912a131a2b3a5ac25d04aa5d7c6cb631e90deebbe"} Dec 09 17:15:58 crc kubenswrapper[4853]: I1209 17:15:58.593497 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:15:58 crc kubenswrapper[4853]: I1209 17:15:58.593567 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:16:01 crc kubenswrapper[4853]: I1209 17:16:01.498614 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pvfz9" event={"ID":"39bb0b5d-d4ed-40d2-9c00-226b17a9ef09","Type":"ContainerStarted","Data":"1ce9eade01d47ea4cd0237c0e2380ba3b177709e4a2f1b71dfe42dac6d74a3cf"} Dec 09 17:16:07 crc kubenswrapper[4853]: I1209 17:16:07.594112 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:16:07 crc kubenswrapper[4853]: I1209 17:16:07.594676 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:16:07 crc kubenswrapper[4853]: I1209 17:16:07.640646 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:16:07 crc kubenswrapper[4853]: I1209 17:16:07.662446 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pvfz9" podStartSLOduration=7.458781757 podStartE2EDuration="10.662422417s" podCreationTimestamp="2025-12-09 17:15:57 +0000 UTC" firstStartedPulling="2025-12-09 17:15:58.094264205 +0000 UTC m=+1185.029003397" lastFinishedPulling="2025-12-09 17:16:01.297904875 +0000 UTC m=+1188.232644057" observedRunningTime="2025-12-09 17:16:01.520749361 +0000 UTC m=+1188.455488553" watchObservedRunningTime="2025-12-09 17:16:07.662422417 +0000 UTC m=+1194.597161599" Dec 09 17:16:08 crc kubenswrapper[4853]: I1209 17:16:08.589591 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-pvfz9" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.679062 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl"] Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.681015 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.682861 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tlkzh" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.695851 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl"] Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.856539 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-util\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.856626 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzx5\" (UniqueName: \"kubernetes.io/projected/7fa53334-e1e4-4682-931a-889de208185b-kube-api-access-nmzx5\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.856690 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-bundle\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.958540 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-util\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.958640 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzx5\" (UniqueName: \"kubernetes.io/projected/7fa53334-e1e4-4682-931a-889de208185b-kube-api-access-nmzx5\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.958695 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-bundle\") pod 
\"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.959128 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-util\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.959325 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-bundle\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:09 crc kubenswrapper[4853]: I1209 17:16:09.981814 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzx5\" (UniqueName: \"kubernetes.io/projected/7fa53334-e1e4-4682-931a-889de208185b-kube-api-access-nmzx5\") pod \"84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:10 crc kubenswrapper[4853]: I1209 17:16:10.007297 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:10 crc kubenswrapper[4853]: I1209 17:16:10.594197 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl"] Dec 09 17:16:11 crc kubenswrapper[4853]: I1209 17:16:11.588460 4853 generic.go:334] "Generic (PLEG): container finished" podID="7fa53334-e1e4-4682-931a-889de208185b" containerID="6b493d1a5da91d2ece280142a59799bd6f3761e6e63115eb18ee9d29cfa4b1bd" exitCode=0 Dec 09 17:16:11 crc kubenswrapper[4853]: I1209 17:16:11.588553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" event={"ID":"7fa53334-e1e4-4682-931a-889de208185b","Type":"ContainerDied","Data":"6b493d1a5da91d2ece280142a59799bd6f3761e6e63115eb18ee9d29cfa4b1bd"} Dec 09 17:16:11 crc kubenswrapper[4853]: I1209 17:16:11.588752 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" event={"ID":"7fa53334-e1e4-4682-931a-889de208185b","Type":"ContainerStarted","Data":"0d62d04048d0ce8686350663ec0b41d78007fe69475851de5b810b28cfd2c63f"} Dec 09 17:16:12 crc kubenswrapper[4853]: I1209 17:16:12.603565 4853 generic.go:334] "Generic (PLEG): container finished" podID="7fa53334-e1e4-4682-931a-889de208185b" containerID="1b9c00c301c6487a287b869e0b519745679f0cd67f20d0d1f7280841d947f84e" exitCode=0 Dec 09 17:16:12 crc kubenswrapper[4853]: I1209 17:16:12.603688 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" 
event={"ID":"7fa53334-e1e4-4682-931a-889de208185b","Type":"ContainerDied","Data":"1b9c00c301c6487a287b869e0b519745679f0cd67f20d0d1f7280841d947f84e"} Dec 09 17:16:13 crc kubenswrapper[4853]: I1209 17:16:13.615265 4853 generic.go:334] "Generic (PLEG): container finished" podID="7fa53334-e1e4-4682-931a-889de208185b" containerID="1717ca41db44b86dbd4305a32f3ebde469a333ec1040bb5a7a0681af63d57d14" exitCode=0 Dec 09 17:16:13 crc kubenswrapper[4853]: I1209 17:16:13.615305 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" event={"ID":"7fa53334-e1e4-4682-931a-889de208185b","Type":"ContainerDied","Data":"1717ca41db44b86dbd4305a32f3ebde469a333ec1040bb5a7a0681af63d57d14"} Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.074429 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.248329 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzx5\" (UniqueName: \"kubernetes.io/projected/7fa53334-e1e4-4682-931a-889de208185b-kube-api-access-nmzx5\") pod \"7fa53334-e1e4-4682-931a-889de208185b\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.248719 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-bundle\") pod \"7fa53334-e1e4-4682-931a-889de208185b\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.248888 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-util\") pod \"7fa53334-e1e4-4682-931a-889de208185b\" (UID: \"7fa53334-e1e4-4682-931a-889de208185b\") " Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.249809 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-bundle" (OuterVolumeSpecName: "bundle") pod "7fa53334-e1e4-4682-931a-889de208185b" (UID: "7fa53334-e1e4-4682-931a-889de208185b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.254267 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa53334-e1e4-4682-931a-889de208185b-kube-api-access-nmzx5" (OuterVolumeSpecName: "kube-api-access-nmzx5") pod "7fa53334-e1e4-4682-931a-889de208185b" (UID: "7fa53334-e1e4-4682-931a-889de208185b"). InnerVolumeSpecName "kube-api-access-nmzx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.264032 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-util" (OuterVolumeSpecName: "util") pod "7fa53334-e1e4-4682-931a-889de208185b" (UID: "7fa53334-e1e4-4682-931a-889de208185b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.352296 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzx5\" (UniqueName: \"kubernetes.io/projected/7fa53334-e1e4-4682-931a-889de208185b-kube-api-access-nmzx5\") on node \"crc\" DevicePath \"\"" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.352366 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.352397 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fa53334-e1e4-4682-931a-889de208185b-util\") on node \"crc\" DevicePath \"\"" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.639288 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" event={"ID":"7fa53334-e1e4-4682-931a-889de208185b","Type":"ContainerDied","Data":"0d62d04048d0ce8686350663ec0b41d78007fe69475851de5b810b28cfd2c63f"} Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.639330 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d62d04048d0ce8686350663ec0b41d78007fe69475851de5b810b28cfd2c63f" Dec 09 17:16:15 crc kubenswrapper[4853]: I1209 17:16:15.639356 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.113810 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv"] Dec 09 17:16:19 crc kubenswrapper[4853]: E1209 17:16:19.117740 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa53334-e1e4-4682-931a-889de208185b" containerName="extract" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.117782 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa53334-e1e4-4682-931a-889de208185b" containerName="extract" Dec 09 17:16:19 crc kubenswrapper[4853]: E1209 17:16:19.117818 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa53334-e1e4-4682-931a-889de208185b" containerName="util" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.117826 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa53334-e1e4-4682-931a-889de208185b" containerName="util" Dec 09 17:16:19 crc kubenswrapper[4853]: E1209 17:16:19.117850 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa53334-e1e4-4682-931a-889de208185b" containerName="pull" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.117856 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa53334-e1e4-4682-931a-889de208185b" containerName="pull" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.118038 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa53334-e1e4-4682-931a-889de208185b" containerName="extract" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.123126 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.136864 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv"] Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.138842 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-8rbhg" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.228156 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hgf8\" (UniqueName: \"kubernetes.io/projected/dec17fb3-8dd0-4c56-b058-9d0ab2fae769-kube-api-access-8hgf8\") pod \"openstack-operator-controller-operator-5f5557f974-498cv\" (UID: \"dec17fb3-8dd0-4c56-b058-9d0ab2fae769\") " pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.329733 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hgf8\" (UniqueName: \"kubernetes.io/projected/dec17fb3-8dd0-4c56-b058-9d0ab2fae769-kube-api-access-8hgf8\") pod \"openstack-operator-controller-operator-5f5557f974-498cv\" (UID: \"dec17fb3-8dd0-4c56-b058-9d0ab2fae769\") " pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.355626 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hgf8\" (UniqueName: \"kubernetes.io/projected/dec17fb3-8dd0-4c56-b058-9d0ab2fae769-kube-api-access-8hgf8\") pod \"openstack-operator-controller-operator-5f5557f974-498cv\" (UID: \"dec17fb3-8dd0-4c56-b058-9d0ab2fae769\") " pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.461289 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" Dec 09 17:16:19 crc kubenswrapper[4853]: I1209 17:16:19.957888 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv"] Dec 09 17:16:19 crc kubenswrapper[4853]: W1209 17:16:19.962349 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddec17fb3_8dd0_4c56_b058_9d0ab2fae769.slice/crio-529e7bc81379e460e0f818b5f5a8fb32c853d327605a21b48edf16e323c07ccf WatchSource:0}: Error finding container 529e7bc81379e460e0f818b5f5a8fb32c853d327605a21b48edf16e323c07ccf: Status 404 returned error can't find the container with id 529e7bc81379e460e0f818b5f5a8fb32c853d327605a21b48edf16e323c07ccf Dec 09 17:16:20 crc kubenswrapper[4853]: I1209 17:16:20.685354 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" event={"ID":"dec17fb3-8dd0-4c56-b058-9d0ab2fae769","Type":"ContainerStarted","Data":"529e7bc81379e460e0f818b5f5a8fb32c853d327605a21b48edf16e323c07ccf"} Dec 09 17:16:25 crc kubenswrapper[4853]: I1209 17:16:25.748207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" event={"ID":"dec17fb3-8dd0-4c56-b058-9d0ab2fae769","Type":"ContainerStarted","Data":"362ba6cd7dd58cce4f9fed0a56c7d294c13a412359c7a630c25027184302e4b5"} Dec 09 17:16:25 crc kubenswrapper[4853]: I1209 17:16:25.748774 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" Dec 09 17:16:25 crc kubenswrapper[4853]: I1209 17:16:25.777494 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" podStartSLOduration=1.373710019 podStartE2EDuration="6.777477326s" podCreationTimestamp="2025-12-09 17:16:19 +0000 UTC" firstStartedPulling="2025-12-09 17:16:19.966401761 +0000 UTC m=+1206.901140943" lastFinishedPulling="2025-12-09 17:16:25.370169068 +0000 UTC m=+1212.304908250" observedRunningTime="2025-12-09 17:16:25.770486406 +0000 UTC m=+1212.705225598" watchObservedRunningTime="2025-12-09 17:16:25.777477326 +0000 UTC m=+1212.712216508" Dec 09 17:16:28 crc kubenswrapper[4853]: I1209 17:16:28.592476 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:16:28 crc kubenswrapper[4853]: I1209 17:16:28.593059 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:16:39 crc kubenswrapper[4853]: I1209 17:16:39.464322 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5f5557f974-498cv" Dec 09 17:16:58 crc kubenswrapper[4853]: I1209 17:16:58.593520 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:16:58 crc kubenswrapper[4853]: I1209 17:16:58.594138 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:16:58 crc kubenswrapper[4853]: I1209 17:16:58.594213 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:16:58 crc kubenswrapper[4853]: I1209 17:16:58.595099 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff410bbb47eb0d8e5f80ec7cc8ea558647698f94ee7441c8421ab12f2216ccf7"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:16:58 crc kubenswrapper[4853]: I1209 17:16:58.595174 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://ff410bbb47eb0d8e5f80ec7cc8ea558647698f94ee7441c8421ab12f2216ccf7" gracePeriod=600 Dec 09 17:16:59 crc kubenswrapper[4853]: I1209 17:16:59.165436 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="ff410bbb47eb0d8e5f80ec7cc8ea558647698f94ee7441c8421ab12f2216ccf7" exitCode=0 Dec 09 17:16:59 crc kubenswrapper[4853]: I1209 17:16:59.165510 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"ff410bbb47eb0d8e5f80ec7cc8ea558647698f94ee7441c8421ab12f2216ccf7"} Dec 09 17:16:59 crc kubenswrapper[4853]: I1209 17:16:59.165793 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"f6f987a0c43c35d8870f761c6c8a9e4bd42afed53db05f41b90af0f3121049ce"} Dec 09 17:16:59 crc kubenswrapper[4853]: I1209 17:16:59.165835 4853 scope.go:117] "RemoveContainer" containerID="0f1e13e4d459d808e60b6045bc9ee20eb1af70f1e10bf78d11a94776b32f36e9" Dec 09 17:17:01 crc kubenswrapper[4853]: I1209 17:17:01.945664 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf"] Dec 09 17:17:01 crc kubenswrapper[4853]: I1209 17:17:01.948007 4853 util.go:30] "No sandbox for pod can be found. 
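The machine-config-daemon liveness failures above land 30 seconds apart (17:15:58, 17:16:28, 17:16:58), and only the third flips the probe to unhealthy and triggers the restart, which would fit a periodSeconds=30 / failureThreshold=3 style configuration; that is inferred from the spacing, not read from the pod spec. A small Python sketch of the same counting logic against the same endpoint:

    # Editor's sketch: consecutive-failure counting as a kubelet liveness probe
    # would do it. Endpoint and thresholds are assumptions for illustration.
    import urllib.request
    import urllib.error

    def probe_once(url="http://127.0.0.1:8798/health", timeout=1.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            return False  # e.g. "connect: connection refused", as in the log

    failures = 0
    for _ in range(3):  # three probe periods; any success resets the count
        failures = 0 if probe_once() else failures + 1
    if failures >= 3:   # assumed failureThreshold reached
        print("failed liveness probe, will be restarted")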
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" Dec 09 17:17:01 crc kubenswrapper[4853]: I1209 17:17:01.950207 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ms5nq" Dec 09 17:17:01 crc kubenswrapper[4853]: I1209 17:17:01.956538 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt"] Dec 09 17:17:01 crc kubenswrapper[4853]: I1209 17:17:01.958219 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" Dec 09 17:17:01 crc kubenswrapper[4853]: I1209 17:17:01.960740 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-v66m2" Dec 09 17:17:01 crc kubenswrapper[4853]: I1209 17:17:01.980841 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.002070 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.012464 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.013864 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.015381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgw2g\" (UniqueName: \"kubernetes.io/projected/29813971-d50f-4186-88c8-380d54284514-kube-api-access-lgw2g\") pod \"barbican-operator-controller-manager-7d9dfd778-bj2bf\" (UID: \"29813971-d50f-4186-88c8-380d54284514\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.015431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9rp\" (UniqueName: \"kubernetes.io/projected/967443dd-77e8-4090-a90e-c7e5f2152acb-kube-api-access-2p9rp\") pod \"cinder-operator-controller-manager-6c677c69b-xzhzt\" (UID: \"967443dd-77e8-4090-a90e-c7e5f2152acb\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.016234 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-22c5z" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.042704 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-6j847"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.043919 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.048504 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8fmjf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.062187 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.071430 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-6j847"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.113657 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.115051 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.117710 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskgz\" (UniqueName: \"kubernetes.io/projected/fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf-kube-api-access-jskgz\") pod \"glance-operator-controller-manager-5697bb5779-6j847\" (UID: \"fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.117786 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgw2g\" (UniqueName: \"kubernetes.io/projected/29813971-d50f-4186-88c8-380d54284514-kube-api-access-lgw2g\") pod \"barbican-operator-controller-manager-7d9dfd778-bj2bf\" (UID: \"29813971-d50f-4186-88c8-380d54284514\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.118033 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.118077 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9rp\" (UniqueName: \"kubernetes.io/projected/967443dd-77e8-4090-a90e-c7e5f2152acb-kube-api-access-2p9rp\") pod \"cinder-operator-controller-manager-6c677c69b-xzhzt\" (UID: \"967443dd-77e8-4090-a90e-c7e5f2152acb\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.118145 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2kn\" (UniqueName: \"kubernetes.io/projected/565b5b04-34ef-414f-8316-0b6ea0f7835e-kube-api-access-cb2kn\") pod \"designate-operator-controller-manager-697fb699cf-cxf5s\" (UID: \"565b5b04-34ef-414f-8316-0b6ea0f7835e\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.124090 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ckn99" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.131734 4853 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.133344 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.134832 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-gb6c6" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.149694 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.151040 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.157037 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tr5hj" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.161483 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgw2g\" (UniqueName: \"kubernetes.io/projected/29813971-d50f-4186-88c8-380d54284514-kube-api-access-lgw2g\") pod \"barbican-operator-controller-manager-7d9dfd778-bj2bf\" (UID: \"29813971-d50f-4186-88c8-380d54284514\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.168482 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.172976 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.176207 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9rp\" (UniqueName: \"kubernetes.io/projected/967443dd-77e8-4090-a90e-c7e5f2152acb-kube-api-access-2p9rp\") pod \"cinder-operator-controller-manager-6c677c69b-xzhzt\" (UID: \"967443dd-77e8-4090-a90e-c7e5f2152acb\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.182763 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.183938 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.188807 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.194318 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h56mq" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.207966 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.214672 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.216523 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.220775 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskgz\" (UniqueName: \"kubernetes.io/projected/fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf-kube-api-access-jskgz\") pod \"glance-operator-controller-manager-5697bb5779-6j847\" (UID: \"fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.220826 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.220901 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2kn\" (UniqueName: \"kubernetes.io/projected/565b5b04-34ef-414f-8316-0b6ea0f7835e-kube-api-access-cb2kn\") pod \"designate-operator-controller-manager-697fb699cf-cxf5s\" (UID: \"565b5b04-34ef-414f-8316-0b6ea0f7835e\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.220963 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bswg\" (UniqueName: \"kubernetes.io/projected/27577524-15bd-403c-9a4f-a693e212b9d3-kube-api-access-4bswg\") pod \"horizon-operator-controller-manager-68c6d99b8f-qvpfh\" (UID: \"27577524-15bd-403c-9a4f-a693e212b9d3\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.221013 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5kvx\" (UniqueName: \"kubernetes.io/projected/716e46e2-2382-4568-956b-eb55e54cbc92-kube-api-access-c5kvx\") pod \"heat-operator-controller-manager-5f64f6f8bb-f5fm2\" (UID: \"716e46e2-2382-4568-956b-eb55e54cbc92\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.221058 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjfpn\" (UniqueName: \"kubernetes.io/projected/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-kube-api-access-vjfpn\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.221236 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-pxw6f" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.224544 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.240212 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.247168 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2kn\" (UniqueName: \"kubernetes.io/projected/565b5b04-34ef-414f-8316-0b6ea0f7835e-kube-api-access-cb2kn\") pod \"designate-operator-controller-manager-697fb699cf-cxf5s\" (UID: \"565b5b04-34ef-414f-8316-0b6ea0f7835e\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.248747 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.254460 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9r8zg" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.264390 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskgz\" (UniqueName: \"kubernetes.io/projected/fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf-kube-api-access-jskgz\") pod \"glance-operator-controller-manager-5697bb5779-6j847\" (UID: \"fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.266073 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.278094 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.307888 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.338858 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.372317 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.381414 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bswg\" (UniqueName: \"kubernetes.io/projected/27577524-15bd-403c-9a4f-a693e212b9d3-kube-api-access-4bswg\") pod \"horizon-operator-controller-manager-68c6d99b8f-qvpfh\" (UID: \"27577524-15bd-403c-9a4f-a693e212b9d3\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.382003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5kvx\" (UniqueName: \"kubernetes.io/projected/716e46e2-2382-4568-956b-eb55e54cbc92-kube-api-access-c5kvx\") pod \"heat-operator-controller-manager-5f64f6f8bb-f5fm2\" (UID: \"716e46e2-2382-4568-956b-eb55e54cbc92\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.382069 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjfpn\" (UniqueName: \"kubernetes.io/projected/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-kube-api-access-vjfpn\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.382159 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5spf2\" (UniqueName: \"kubernetes.io/projected/3861e360-3725-49dd-9201-9efb6bcaf978-kube-api-access-5spf2\") pod \"ironic-operator-controller-manager-967d97867-h4nnl\" (UID: \"3861e360-3725-49dd-9201-9efb6bcaf978\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.382208 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5wr\" (UniqueName: \"kubernetes.io/projected/808c6195-03f9-4f53-8cc0-8a70dc0d9588-kube-api-access-dx5wr\") pod \"manila-operator-controller-manager-5b5fd79c9c-qtnbj\" (UID: \"808c6195-03f9-4f53-8cc0-8a70dc0d9588\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.382229 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.382300 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bst\" (UniqueName: \"kubernetes.io/projected/4f498f4a-152d-4c28-85b6-71fdeb32d148-kube-api-access-q8bst\") pod \"keystone-operator-controller-manager-7765d96ddf-vhm5h\" (UID: \"4f498f4a-152d-4c28-85b6-71fdeb32d148\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.383121 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.383444 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ntq2d" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.383636 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj"] Dec 09 17:17:02 crc kubenswrapper[4853]: E1209 17:17:02.385135 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:02 crc kubenswrapper[4853]: E1209 17:17:02.385184 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert podName:a4ed8e4a-54de-45d2-962c-7fdbfd49b302 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:02.885169467 +0000 UTC m=+1249.819908649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert") pod "infra-operator-controller-manager-78d48bff9d-lf994" (UID: "a4ed8e4a-54de-45d2-962c-7fdbfd49b302") : secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.394453 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.397797 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.404338 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xwtv8" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.404966 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bswg\" (UniqueName: \"kubernetes.io/projected/27577524-15bd-403c-9a4f-a693e212b9d3-kube-api-access-4bswg\") pod \"horizon-operator-controller-manager-68c6d99b8f-qvpfh\" (UID: \"27577524-15bd-403c-9a4f-a693e212b9d3\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.406100 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.408356 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5kvx\" (UniqueName: \"kubernetes.io/projected/716e46e2-2382-4568-956b-eb55e54cbc92-kube-api-access-c5kvx\") pod \"heat-operator-controller-manager-5f64f6f8bb-f5fm2\" (UID: \"716e46e2-2382-4568-956b-eb55e54cbc92\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.413312 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.426993 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.429522 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7dzrt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.429894 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjfpn\" (UniqueName: \"kubernetes.io/projected/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-kube-api-access-vjfpn\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.446129 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.478536 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.491102 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5spf2\" (UniqueName: \"kubernetes.io/projected/3861e360-3725-49dd-9201-9efb6bcaf978-kube-api-access-5spf2\") pod \"ironic-operator-controller-manager-967d97867-h4nnl\" (UID: \"3861e360-3725-49dd-9201-9efb6bcaf978\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.491165 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5wr\" (UniqueName: \"kubernetes.io/projected/808c6195-03f9-4f53-8cc0-8a70dc0d9588-kube-api-access-dx5wr\") pod \"manila-operator-controller-manager-5b5fd79c9c-qtnbj\" (UID: \"808c6195-03f9-4f53-8cc0-8a70dc0d9588\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.491212 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbtm\" (UniqueName: \"kubernetes.io/projected/317f5d16-66f6-42fb-b6b7-01ad51915f20-kube-api-access-crbtm\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jq42w\" (UID: \"317f5d16-66f6-42fb-b6b7-01ad51915f20\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.491258 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5bm9\" (UniqueName: \"kubernetes.io/projected/da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a-kube-api-access-v5bm9\") pod \"mariadb-operator-controller-manager-79c8c4686c-6c956\" (UID: \"da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.491301 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bst\" (UniqueName: \"kubernetes.io/projected/4f498f4a-152d-4c28-85b6-71fdeb32d148-kube-api-access-q8bst\") pod \"keystone-operator-controller-manager-7765d96ddf-vhm5h\" (UID: \"4f498f4a-152d-4c28-85b6-71fdeb32d148\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.494682 4853 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-77blt"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.496484 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.499738 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fjnsq" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.519388 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5spf2\" (UniqueName: \"kubernetes.io/projected/3861e360-3725-49dd-9201-9efb6bcaf978-kube-api-access-5spf2\") pod \"ironic-operator-controller-manager-967d97867-h4nnl\" (UID: \"3861e360-3725-49dd-9201-9efb6bcaf978\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.520130 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-77blt"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.521573 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bst\" (UniqueName: \"kubernetes.io/projected/4f498f4a-152d-4c28-85b6-71fdeb32d148-kube-api-access-q8bst\") pod \"keystone-operator-controller-manager-7765d96ddf-vhm5h\" (UID: \"4f498f4a-152d-4c28-85b6-71fdeb32d148\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.522057 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.524154 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5wr\" (UniqueName: \"kubernetes.io/projected/808c6195-03f9-4f53-8cc0-8a70dc0d9588-kube-api-access-dx5wr\") pod \"manila-operator-controller-manager-5b5fd79c9c-qtnbj\" (UID: \"808c6195-03f9-4f53-8cc0-8a70dc0d9588\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.534428 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.541693 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.544952 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.551055 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-29rt5" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.551371 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.556693 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.567810 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.570249 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.576672 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.580137 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.586739 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tjrhs" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.587265 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-cmskq" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.589425 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.597311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbtm\" (UniqueName: \"kubernetes.io/projected/317f5d16-66f6-42fb-b6b7-01ad51915f20-kube-api-access-crbtm\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jq42w\" (UID: \"317f5d16-66f6-42fb-b6b7-01ad51915f20\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.597381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87nbr\" (UniqueName: \"kubernetes.io/projected/7f4def87-3330-40cc-863c-f6bfe07e9c2d-kube-api-access-87nbr\") pod \"octavia-operator-controller-manager-998648c74-77blt\" (UID: \"7f4def87-3330-40cc-863c-f6bfe07e9c2d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.597410 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd996\" (UniqueName: \"kubernetes.io/projected/8c543f97-25ca-48a7-8b42-120884dee80b-kube-api-access-wd996\") pod \"nova-operator-controller-manager-697bc559fc-kgtxf\" (UID: \"8c543f97-25ca-48a7-8b42-120884dee80b\") " 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.597430 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5bm9\" (UniqueName: \"kubernetes.io/projected/da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a-kube-api-access-v5bm9\") pod \"mariadb-operator-controller-manager-79c8c4686c-6c956\" (UID: \"da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.602993 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.623915 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.633073 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5bm9\" (UniqueName: \"kubernetes.io/projected/da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a-kube-api-access-v5bm9\") pod \"mariadb-operator-controller-manager-79c8c4686c-6c956\" (UID: \"da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.651753 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbtm\" (UniqueName: \"kubernetes.io/projected/317f5d16-66f6-42fb-b6b7-01ad51915f20-kube-api-access-crbtm\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jq42w\" (UID: \"317f5d16-66f6-42fb-b6b7-01ad51915f20\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.656341 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.660097 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-f96rs" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.667024 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.680232 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.681682 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.684352 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6tjp2" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.688523 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.699057 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.699261 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2pkm\" (UniqueName: \"kubernetes.io/projected/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-kube-api-access-z2pkm\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.699281 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkc5w\" (UniqueName: \"kubernetes.io/projected/e9f17b0a-4f03-460b-b0e9-743882aa435e-kube-api-access-jkc5w\") pod \"ovn-operator-controller-manager-b6456fdb6-cgkl7\" (UID: \"e9f17b0a-4f03-460b-b0e9-743882aa435e\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.699320 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6dk\" (UniqueName: \"kubernetes.io/projected/0a7fce2f-b4b3-4f6c-b417-aa159e161722-kube-api-access-kp6dk\") pod \"placement-operator-controller-manager-78f8948974-nfkqh\" (UID: \"0a7fce2f-b4b3-4f6c-b417-aa159e161722\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.699366 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87nbr\" (UniqueName: \"kubernetes.io/projected/7f4def87-3330-40cc-863c-f6bfe07e9c2d-kube-api-access-87nbr\") pod \"octavia-operator-controller-manager-998648c74-77blt\" (UID: \"7f4def87-3330-40cc-863c-f6bfe07e9c2d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.699389 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd996\" (UniqueName: \"kubernetes.io/projected/8c543f97-25ca-48a7-8b42-120884dee80b-kube-api-access-wd996\") pod \"nova-operator-controller-manager-697bc559fc-kgtxf\" (UID: \"8c543f97-25ca-48a7-8b42-120884dee80b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.701448 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.709942 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.727715 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87nbr\" (UniqueName: \"kubernetes.io/projected/7f4def87-3330-40cc-863c-f6bfe07e9c2d-kube-api-access-87nbr\") pod \"octavia-operator-controller-manager-998648c74-77blt\" (UID: \"7f4def87-3330-40cc-863c-f6bfe07e9c2d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.736168 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd996\" (UniqueName: \"kubernetes.io/projected/8c543f97-25ca-48a7-8b42-120884dee80b-kube-api-access-wd996\") pod \"nova-operator-controller-manager-697bc559fc-kgtxf\" (UID: \"8c543f97-25ca-48a7-8b42-120884dee80b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.741093 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.741584 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.743281 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.751056 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gjgg9" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.764008 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.769326 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.794471 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.809701 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.809842 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsnkj\" (UniqueName: \"kubernetes.io/projected/234b75e2-2793-4ec3-ab45-c3603ae69436-kube-api-access-rsnkj\") pod \"telemetry-operator-controller-manager-796785f986-g89lr\" (UID: \"234b75e2-2793-4ec3-ab45-c3603ae69436\") " pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.809920 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjxsj\" (UniqueName: \"kubernetes.io/projected/eff79899-1ea3-418a-86fc-f988303b6da5-kube-api-access-bjxsj\") pod \"swift-operator-controller-manager-9d58d64bc-9bq2r\" (UID: \"eff79899-1ea3-418a-86fc-f988303b6da5\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.809968 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkc5w\" (UniqueName: \"kubernetes.io/projected/e9f17b0a-4f03-460b-b0e9-743882aa435e-kube-api-access-jkc5w\") pod \"ovn-operator-controller-manager-b6456fdb6-cgkl7\" (UID: \"e9f17b0a-4f03-460b-b0e9-743882aa435e\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.810002 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2pkm\" (UniqueName: \"kubernetes.io/projected/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-kube-api-access-z2pkm\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.810066 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6dk\" (UniqueName: \"kubernetes.io/projected/0a7fce2f-b4b3-4f6c-b417-aa159e161722-kube-api-access-kp6dk\") pod \"placement-operator-controller-manager-78f8948974-nfkqh\" (UID: \"0a7fce2f-b4b3-4f6c-b417-aa159e161722\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.810106 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbg6w\" (UniqueName: \"kubernetes.io/projected/85276cf5-13f2-4890-9d08-07f5e01dc90c-kube-api-access-dbg6w\") pod \"test-operator-controller-manager-5854674fcc-s7rrn\" (UID: \"85276cf5-13f2-4890-9d08-07f5e01dc90c\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" Dec 09 17:17:02 crc kubenswrapper[4853]: E1209 17:17:02.814174 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: 
secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:02 crc kubenswrapper[4853]: E1209 17:17:02.814282 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert podName:d2aeb9ff-da65-4fc1-8362-29c263f9f4c3 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:03.314265822 +0000 UTC m=+1250.249005004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fcmbqh" (UID: "d2aeb9ff-da65-4fc1-8362-29c263f9f4c3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.816590 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.825735 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.828746 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pmfhb" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.836643 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.853461 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2pkm\" (UniqueName: \"kubernetes.io/projected/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-kube-api-access-z2pkm\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.869333 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6dk\" (UniqueName: \"kubernetes.io/projected/0a7fce2f-b4b3-4f6c-b417-aa159e161722-kube-api-access-kp6dk\") pod \"placement-operator-controller-manager-78f8948974-nfkqh\" (UID: \"0a7fce2f-b4b3-4f6c-b417-aa159e161722\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.877047 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkc5w\" (UniqueName: \"kubernetes.io/projected/e9f17b0a-4f03-460b-b0e9-743882aa435e-kube-api-access-jkc5w\") pod \"ovn-operator-controller-manager-b6456fdb6-cgkl7\" (UID: \"e9f17b0a-4f03-460b-b0e9-743882aa435e\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.885646 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.911691 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.913475 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.917483 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsnkj\" (UniqueName: \"kubernetes.io/projected/234b75e2-2793-4ec3-ab45-c3603ae69436-kube-api-access-rsnkj\") pod \"telemetry-operator-controller-manager-796785f986-g89lr\" (UID: \"234b75e2-2793-4ec3-ab45-c3603ae69436\") " pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.918508 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjxsj\" (UniqueName: \"kubernetes.io/projected/eff79899-1ea3-418a-86fc-f988303b6da5-kube-api-access-bjxsj\") pod \"swift-operator-controller-manager-9d58d64bc-9bq2r\" (UID: \"eff79899-1ea3-418a-86fc-f988303b6da5\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.918573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbg6w\" (UniqueName: \"kubernetes.io/projected/85276cf5-13f2-4890-9d08-07f5e01dc90c-kube-api-access-dbg6w\") pod \"test-operator-controller-manager-5854674fcc-s7rrn\" (UID: \"85276cf5-13f2-4890-9d08-07f5e01dc90c\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.918627 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.918720 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rffbz\" (UniqueName: \"kubernetes.io/projected/8796eb82-e5f1-4ee0-90de-ee42e6010e0d-kube-api-access-rffbz\") pod \"watcher-operator-controller-manager-667bd8d554-qxpbb\" (UID: \"8796eb82-e5f1-4ee0-90de-ee42e6010e0d\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" Dec 09 17:17:02 crc kubenswrapper[4853]: E1209 17:17:02.919536 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:02 crc kubenswrapper[4853]: E1209 17:17:02.919632 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert podName:a4ed8e4a-54de-45d2-962c-7fdbfd49b302 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:03.919587191 +0000 UTC m=+1250.854326453 (durationBeforeRetry 1s). 
Dec 09 17:17:02 crc kubenswrapper[4853]: E1209 17:17:02.919632 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert podName:a4ed8e4a-54de-45d2-962c-7fdbfd49b302 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:03.919587191 +0000 UTC m=+1250.854326453 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert") pod "infra-operator-controller-manager-78d48bff9d-lf994" (UID: "a4ed8e4a-54de-45d2-962c-7fdbfd49b302") : secret "infra-operator-webhook-server-cert" not found
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.921515 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.923731 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zk4p7"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.923870 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.934191 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f"]
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.943146 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsnkj\" (UniqueName: \"kubernetes.io/projected/234b75e2-2793-4ec3-ab45-c3603ae69436-kube-api-access-rsnkj\") pod \"telemetry-operator-controller-manager-796785f986-g89lr\" (UID: \"234b75e2-2793-4ec3-ab45-c3603ae69436\") " pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.950877 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987"]
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.952424 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjxsj\" (UniqueName: \"kubernetes.io/projected/eff79899-1ea3-418a-86fc-f988303b6da5-kube-api-access-bjxsj\") pod \"swift-operator-controller-manager-9d58d64bc-9bq2r\" (UID: \"eff79899-1ea3-418a-86fc-f988303b6da5\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.953173 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.953841 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbg6w\" (UniqueName: \"kubernetes.io/projected/85276cf5-13f2-4890-9d08-07f5e01dc90c-kube-api-access-dbg6w\") pod \"test-operator-controller-manager-5854674fcc-s7rrn\" (UID: \"85276cf5-13f2-4890-9d08-07f5e01dc90c\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.960572 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt"
Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.960676 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-ntpdt"
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.973885 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987"] Dec 09 17:17:02 crc kubenswrapper[4853]: I1209 17:17:02.991068 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf"] Dec 09 17:17:02 crc kubenswrapper[4853]: W1209 17:17:02.999573 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod967443dd_77e8_4090_a90e_c7e5f2152acb.slice/crio-eced87e48ad412f4f3c095b6405983a499ea9cec86891b4e52b53ece08d91ab8 WatchSource:0}: Error finding container eced87e48ad412f4f3c095b6405983a499ea9cec86891b4e52b53ece08d91ab8: Status 404 returned error can't find the container with id eced87e48ad412f4f3c095b6405983a499ea9cec86891b4e52b53ece08d91ab8 Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.002350 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt"] Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.020369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rffbz\" (UniqueName: \"kubernetes.io/projected/8796eb82-e5f1-4ee0-90de-ee42e6010e0d-kube-api-access-rffbz\") pod \"watcher-operator-controller-manager-667bd8d554-qxpbb\" (UID: \"8796eb82-e5f1-4ee0-90de-ee42e6010e0d\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.020479 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.020508 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.020571 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2dss\" (UniqueName: \"kubernetes.io/projected/f395880e-faf4-4550-aac9-9cef954c967a-kube-api-access-w2dss\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.020624 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65kzk\" (UniqueName: \"kubernetes.io/projected/6869667f-ac77-482d-b8c1-7ee9d7525c59-kube-api-access-65kzk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sv987\" (UID: \"6869667f-ac77-482d-b8c1-7ee9d7525c59\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.052732 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.061492 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rffbz\" (UniqueName: \"kubernetes.io/projected/8796eb82-e5f1-4ee0-90de-ee42e6010e0d-kube-api-access-rffbz\") pod \"watcher-operator-controller-manager-667bd8d554-qxpbb\" (UID: \"8796eb82-e5f1-4ee0-90de-ee42e6010e0d\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.078513 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.101048 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.123627 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.123961 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.124066 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2dss\" (UniqueName: \"kubernetes.io/projected/f395880e-faf4-4550-aac9-9cef954c967a-kube-api-access-w2dss\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.124102 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65kzk\" (UniqueName: \"kubernetes.io/projected/6869667f-ac77-482d-b8c1-7ee9d7525c59-kube-api-access-65kzk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sv987\" (UID: \"6869667f-ac77-482d-b8c1-7ee9d7525c59\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.124685 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.125221 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.126138 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.127541 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:03.624731718 +0000 UTC m=+1250.559470900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "metrics-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.127571 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:03.627555976 +0000 UTC m=+1250.562295158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.148150 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65kzk\" (UniqueName: \"kubernetes.io/projected/6869667f-ac77-482d-b8c1-7ee9d7525c59-kube-api-access-65kzk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sv987\" (UID: \"6869667f-ac77-482d-b8c1-7ee9d7525c59\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.162235 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2dss\" (UniqueName: \"kubernetes.io/projected/f395880e-faf4-4550-aac9-9cef954c967a-kube-api-access-w2dss\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.183514 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.193174 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-6j847"] Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.265384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" event={"ID":"967443dd-77e8-4090-a90e-c7e5f2152acb","Type":"ContainerStarted","Data":"eced87e48ad412f4f3c095b6405983a499ea9cec86891b4e52b53ece08d91ab8"} Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.287168 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" Dec 09 17:17:03 crc kubenswrapper[4853]: W1209 17:17:03.291247 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcd1cd96_4a49_4b3d_94f1_df1bae0cf3bf.slice/crio-e683e1c1a03dc087a2937fb17e80933539fadfde8cf4c8dfcc024d7aea64f87d WatchSource:0}: Error finding container e683e1c1a03dc087a2937fb17e80933539fadfde8cf4c8dfcc024d7aea64f87d: Status 404 returned error can't find the container with id e683e1c1a03dc087a2937fb17e80933539fadfde8cf4c8dfcc024d7aea64f87d Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.291334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" event={"ID":"29813971-d50f-4186-88c8-380d54284514","Type":"ContainerStarted","Data":"555965e5c87814326bf8cbed23b888ec1c564d4d7862e0bd569fb553ccfde21d"} Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.330095 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.330229 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.330359 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert podName:d2aeb9ff-da65-4fc1-8362-29c263f9f4c3 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:04.330343606 +0000 UTC m=+1251.265082788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fcmbqh" (UID: "d2aeb9ff-da65-4fc1-8362-29c263f9f4c3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.350939 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s"] Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.634643 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.634986 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.635194 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.635249 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:04.635230107 +0000 UTC m=+1251.569969279 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.635617 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.635659 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:04.635648908 +0000 UTC m=+1251.570388090 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "metrics-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.728398 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2"] Dec 09 17:17:03 crc kubenswrapper[4853]: W1209 17:17:03.733087 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod716e46e2_2382_4568_956b_eb55e54cbc92.slice/crio-a0a3573f3acb0d6ae2b004a251f096e9970e01467eb152299197b1b9b474d6e4 WatchSource:0}: Error finding container a0a3573f3acb0d6ae2b004a251f096e9970e01467eb152299197b1b9b474d6e4: Status 404 returned error can't find the container with id a0a3573f3acb0d6ae2b004a251f096e9970e01467eb152299197b1b9b474d6e4 Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.773861 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl"] Dec 09 17:17:03 crc kubenswrapper[4853]: W1209 17:17:03.781154 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3861e360_3725_49dd_9201_9efb6bcaf978.slice/crio-017590f1019c327159c622dd8263c10d718babd6ae027dee6c999dfbc6a81603 WatchSource:0}: Error finding container 017590f1019c327159c622dd8263c10d718babd6ae027dee6c999dfbc6a81603: Status 404 returned error can't find the container with id 017590f1019c327159c622dd8263c10d718babd6ae027dee6c999dfbc6a81603 Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.788232 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh"] Dec 09 17:17:03 crc kubenswrapper[4853]: W1209 17:17:03.788889 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27577524_15bd_403c_9a4f_a693e212b9d3.slice/crio-34235f4d6e8bb9eab31544fcdf759d172dad2dbd44f48e0206f7581fdde0ffdf WatchSource:0}: Error finding container 34235f4d6e8bb9eab31544fcdf759d172dad2dbd44f48e0206f7581fdde0ffdf: Status 404 returned error can't find the container with id 34235f4d6e8bb9eab31544fcdf759d172dad2dbd44f48e0206f7581fdde0ffdf Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.941054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.941271 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: E1209 17:17:03.941323 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert podName:a4ed8e4a-54de-45d2-962c-7fdbfd49b302 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:05.94130936 +0000 UTC m=+1252.876048542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert") pod "infra-operator-controller-manager-78d48bff9d-lf994" (UID: "a4ed8e4a-54de-45d2-962c-7fdbfd49b302") : secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.947404 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w"] Dec 09 17:17:03 crc kubenswrapper[4853]: I1209 17:17:03.960389 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h"] Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.119124 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj"] Dec 09 17:17:04 crc kubenswrapper[4853]: W1209 17:17:04.137704 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod808c6195_03f9_4f53_8cc0_8a70dc0d9588.slice/crio-04dd9aae8817c01f43e3915f82b544d01551250d8f96a498329bda74a845cbcf WatchSource:0}: Error finding container 04dd9aae8817c01f43e3915f82b544d01551250d8f96a498329bda74a845cbcf: Status 404 returned error can't find the container with id 04dd9aae8817c01f43e3915f82b544d01551250d8f96a498329bda74a845cbcf Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.300996 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" event={"ID":"565b5b04-34ef-414f-8316-0b6ea0f7835e","Type":"ContainerStarted","Data":"5af7cf717f635c805899dfb81ec7287e02bdfa06d0fbcd1285120ed501564247"} Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.303156 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" event={"ID":"27577524-15bd-403c-9a4f-a693e212b9d3","Type":"ContainerStarted","Data":"34235f4d6e8bb9eab31544fcdf759d172dad2dbd44f48e0206f7581fdde0ffdf"} Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.304140 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" event={"ID":"4f498f4a-152d-4c28-85b6-71fdeb32d148","Type":"ContainerStarted","Data":"e4f90354beeff76c5e6b36935dbe7efd9d97bc587ad94b4468a6410151e5f240"} Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.307822 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" event={"ID":"716e46e2-2382-4568-956b-eb55e54cbc92","Type":"ContainerStarted","Data":"a0a3573f3acb0d6ae2b004a251f096e9970e01467eb152299197b1b9b474d6e4"} Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.309136 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" event={"ID":"808c6195-03f9-4f53-8cc0-8a70dc0d9588","Type":"ContainerStarted","Data":"04dd9aae8817c01f43e3915f82b544d01551250d8f96a498329bda74a845cbcf"} Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.311251 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" event={"ID":"317f5d16-66f6-42fb-b6b7-01ad51915f20","Type":"ContainerStarted","Data":"e1f94e136f1300a9b18f4f09c27c7a57816e4b58dfa35443720ad116cd3516fb"} Dec 09 17:17:04 crc kubenswrapper[4853]: 
I1209 17:17:04.312866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" event={"ID":"fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf","Type":"ContainerStarted","Data":"e683e1c1a03dc087a2937fb17e80933539fadfde8cf4c8dfcc024d7aea64f87d"} Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.314868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" event={"ID":"3861e360-3725-49dd-9201-9efb6bcaf978","Type":"ContainerStarted","Data":"017590f1019c327159c622dd8263c10d718babd6ae027dee6c999dfbc6a81603"} Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.346448 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.346718 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.346806 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert podName:d2aeb9ff-da65-4fc1-8362-29c263f9f4c3 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:06.346784969 +0000 UTC m=+1253.281524151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fcmbqh" (UID: "d2aeb9ff-da65-4fc1-8362-29c263f9f4c3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.638459 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-77blt"] Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.650925 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.650975 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.651680 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.651748 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" 
failed. No retries permitted until 2025-12-09 17:17:06.651726771 +0000 UTC m=+1253.586465953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "webhook-server-cert" not found Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.651813 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.651840 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:06.651831004 +0000 UTC m=+1253.586570186 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "metrics-server-cert" not found Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.675120 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987"] Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.685876 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956"] Dec 09 17:17:04 crc kubenswrapper[4853]: W1209 17:17:04.698110 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff79899_1ea3_418a_86fc_f988303b6da5.slice/crio-17e6599ff0033e9ff71c2d324620d2b4aca3d68a164d2c705097ca538bc72e29 WatchSource:0}: Error finding container 17e6599ff0033e9ff71c2d324620d2b4aca3d68a164d2c705097ca538bc72e29: Status 404 returned error can't find the container with id 17e6599ff0033e9ff71c2d324620d2b4aca3d68a164d2c705097ca538bc72e29 Dec 09 17:17:04 crc kubenswrapper[4853]: W1209 17:17:04.701737 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a7fce2f_b4b3_4f6c_b417_aa159e161722.slice/crio-ac511dfadc8106818f91dd40ff8a510517a3824a9322aa212991abc872e546c9 WatchSource:0}: Error finding container ac511dfadc8106818f91dd40ff8a510517a3824a9322aa212991abc872e546c9: Status 404 returned error can't find the container with id ac511dfadc8106818f91dd40ff8a510517a3824a9322aa212991abc872e546c9 Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.702904 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr"] Dec 09 17:17:04 crc kubenswrapper[4853]: W1209 17:17:04.717804 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9f17b0a_4f03_460b_b0e9_743882aa435e.slice/crio-6e29b0a8ff017c97a23e8d29ba5cca3d6300f400413a0c40fb998fbe304acb35 WatchSource:0}: Error finding container 6e29b0a8ff017c97a23e8d29ba5cca3d6300f400413a0c40fb998fbe304acb35: Status 404 returned error can't find the container with id 6e29b0a8ff017c97a23e8d29ba5cca3d6300f400413a0c40fb998fbe304acb35 Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.717887 4853 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf"] Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.727068 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7"] Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.734045 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn"] Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.736209 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkc5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-cgkl7_openstack-operators(e9f17b0a-4f03-460b-b0e9-743882aa435e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.738128 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkc5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-cgkl7_openstack-operators(e9f17b0a-4f03-460b-b0e9-743882aa435e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 17:17:04 crc kubenswrapper[4853]: E1209 17:17:04.739851 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" podUID="e9f17b0a-4f03-460b-b0e9-743882aa435e" Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.740992 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r"] Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.755218 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh"] Dec 09 17:17:04 crc kubenswrapper[4853]: I1209 17:17:04.783159 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb"] Dec 09 17:17:04 crc kubenswrapper[4853]: W1209 17:17:04.817749 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8796eb82_e5f1_4ee0_90de_ee42e6010e0d.slice/crio-72776460f88e5c3edbcbecc59c732d947fa65b6e3a11bdc84e93c4d71bd310aa WatchSource:0}: Error finding container 72776460f88e5c3edbcbecc59c732d947fa65b6e3a11bdc84e93c4d71bd310aa: Status 404 returned error can't find the container with id 72776460f88e5c3edbcbecc59c732d947fa65b6e3a11bdc84e93c4d71bd310aa Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.324962 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" event={"ID":"8c543f97-25ca-48a7-8b42-120884dee80b","Type":"ContainerStarted","Data":"d838914e5a847c1ae16d7544af395b99f3a77db01853576b968f52831dab8985"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.327676 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" 
event={"ID":"eff79899-1ea3-418a-86fc-f988303b6da5","Type":"ContainerStarted","Data":"17e6599ff0033e9ff71c2d324620d2b4aca3d68a164d2c705097ca538bc72e29"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.328979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" event={"ID":"85276cf5-13f2-4890-9d08-07f5e01dc90c","Type":"ContainerStarted","Data":"e0e0a7f1106f72e48d755e6c184212fd6d271735e46444854ac3bc5d7c8262e5"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.330386 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" event={"ID":"8796eb82-e5f1-4ee0-90de-ee42e6010e0d","Type":"ContainerStarted","Data":"72776460f88e5c3edbcbecc59c732d947fa65b6e3a11bdc84e93c4d71bd310aa"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.332405 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" event={"ID":"e9f17b0a-4f03-460b-b0e9-743882aa435e","Type":"ContainerStarted","Data":"6e29b0a8ff017c97a23e8d29ba5cca3d6300f400413a0c40fb998fbe304acb35"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.332581 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" event={"ID":"6869667f-ac77-482d-b8c1-7ee9d7525c59","Type":"ContainerStarted","Data":"19e495741963420f936ca89d0df757e73dcbcf9cc8e6c0f4ba0cb0453954bf37"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.336513 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" event={"ID":"0a7fce2f-b4b3-4f6c-b417-aa159e161722","Type":"ContainerStarted","Data":"ac511dfadc8106818f91dd40ff8a510517a3824a9322aa212991abc872e546c9"} Dec 09 17:17:05 crc kubenswrapper[4853]: E1209 17:17:05.336776 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" podUID="e9f17b0a-4f03-460b-b0e9-743882aa435e" Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.338235 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" event={"ID":"234b75e2-2793-4ec3-ab45-c3603ae69436","Type":"ContainerStarted","Data":"57ef0a630497f6156fe45bedd30f44973e9d8831c8c7b91183dd05302457af21"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.341360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" event={"ID":"da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a","Type":"ContainerStarted","Data":"12021df11bdc3cbfe39da9609302d10b1dbd2544faab6cea3840022da55c5451"} Dec 09 17:17:05 crc kubenswrapper[4853]: I1209 17:17:05.346926 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" 
event={"ID":"7f4def87-3330-40cc-863c-f6bfe07e9c2d","Type":"ContainerStarted","Data":"b4347fb556caaebf724b3c756b6c2f2d4fd78161da572071ce97e7a2f24221ed"} Dec 09 17:17:06 crc kubenswrapper[4853]: I1209 17:17:06.041842 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.042420 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.042485 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert podName:a4ed8e4a-54de-45d2-962c-7fdbfd49b302 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:10.042468334 +0000 UTC m=+1256.977207516 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert") pod "infra-operator-controller-manager-78d48bff9d-lf994" (UID: "a4ed8e4a-54de-45d2-962c-7fdbfd49b302") : secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.369946 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" podUID="e9f17b0a-4f03-460b-b0e9-743882aa435e" Dec 09 17:17:06 crc kubenswrapper[4853]: I1209 17:17:06.379476 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.379775 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.379856 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert podName:d2aeb9ff-da65-4fc1-8362-29c263f9f4c3 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:10.379833798 +0000 UTC m=+1257.314573050 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fcmbqh" (UID: "d2aeb9ff-da65-4fc1-8362-29c263f9f4c3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:06 crc kubenswrapper[4853]: I1209 17:17:06.683984 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:06 crc kubenswrapper[4853]: I1209 17:17:06.684044 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.684192 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.684247 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:10.684229324 +0000 UTC m=+1257.618968506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "webhook-server-cert" not found Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.684337 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 17:17:06 crc kubenswrapper[4853]: E1209 17:17:06.684377 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:10.684368638 +0000 UTC m=+1257.619107820 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "metrics-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: I1209 17:17:10.125468 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.125698 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.126248 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert podName:a4ed8e4a-54de-45d2-962c-7fdbfd49b302 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:18.126225044 +0000 UTC m=+1265.060964226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert") pod "infra-operator-controller-manager-78d48bff9d-lf994" (UID: "a4ed8e4a-54de-45d2-962c-7fdbfd49b302") : secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: I1209 17:17:10.431562 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.431788 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.431870 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert podName:d2aeb9ff-da65-4fc1-8362-29c263f9f4c3 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:18.431849355 +0000 UTC m=+1265.366588537 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fcmbqh" (UID: "d2aeb9ff-da65-4fc1-8362-29c263f9f4c3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: I1209 17:17:10.738057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:10 crc kubenswrapper[4853]: I1209 17:17:10.738138 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.738318 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.738436 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:18.738405502 +0000 UTC m=+1265.673144684 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "metrics-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.738433 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 17:17:10 crc kubenswrapper[4853]: E1209 17:17:10.738486 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:18.738479414 +0000 UTC m=+1265.673218596 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "webhook-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: I1209 17:17:18.136323 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.136526 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.136978 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert podName:a4ed8e4a-54de-45d2-962c-7fdbfd49b302 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:34.136953463 +0000 UTC m=+1281.071692655 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert") pod "infra-operator-controller-manager-78d48bff9d-lf994" (UID: "a4ed8e4a-54de-45d2-962c-7fdbfd49b302") : secret "infra-operator-webhook-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: I1209 17:17:18.441855 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.442115 4853 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.442242 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert podName:d2aeb9ff-da65-4fc1-8362-29c263f9f4c3 nodeName:}" failed. No retries permitted until 2025-12-09 17:17:34.442209564 +0000 UTC m=+1281.376948786 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fcmbqh" (UID: "d2aeb9ff-da65-4fc1-8362-29c263f9f4c3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: I1209 17:17:18.747587 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:18 crc kubenswrapper[4853]: I1209 17:17:18.747656 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.747778 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.747779 4853 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.747851 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:34.747834315 +0000 UTC m=+1281.682573497 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "webhook-server-cert" not found Dec 09 17:17:18 crc kubenswrapper[4853]: E1209 17:17:18.747866 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs podName:f395880e-faf4-4550-aac9-9cef954c967a nodeName:}" failed. No retries permitted until 2025-12-09 17:17:34.747860665 +0000 UTC m=+1281.682599847 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs") pod "openstack-operator-controller-manager-866b78c4d6-vrq2f" (UID: "f395880e-faf4-4550-aac9-9cef954c967a") : secret "metrics-server-cert" not found Dec 09 17:17:20 crc kubenswrapper[4853]: E1209 17:17:20.819779 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 09 17:17:20 crc kubenswrapper[4853]: E1209 17:17:20.820433 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbg6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-s7rrn_openstack-operators(85276cf5-13f2-4890-9d08-07f5e01dc90c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:24 crc kubenswrapper[4853]: E1209 17:17:24.552899 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 09 17:17:24 crc 
kubenswrapper[4853]: E1209 17:17:24.553702 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lgw2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-bj2bf_openstack-operators(29813971-d50f-4186-88c8-380d54284514): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:26 crc kubenswrapper[4853]: E1209 17:17:26.455380 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 09 17:17:26 crc kubenswrapper[4853]: E1209 17:17:26.455573 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crbtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-jq42w_openstack-operators(317f5d16-66f6-42fb-b6b7-01ad51915f20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:26 crc kubenswrapper[4853]: E1209 17:17:26.978002 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 09 17:17:26 crc kubenswrapper[4853]: E1209 17:17:26.978753 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dx5wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-qtnbj_openstack-operators(808c6195-03f9-4f53-8cc0-8a70dc0d9588): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:30 crc kubenswrapper[4853]: E1209 17:17:30.470015 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 09 17:17:30 crc kubenswrapper[4853]: E1209 17:17:30.470803 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-vhm5h_openstack-operators(4f498f4a-152d-4c28-85b6-71fdeb32d148): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:31 crc kubenswrapper[4853]: E1209 17:17:31.056735 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 09 17:17:31 crc kubenswrapper[4853]: E1209 17:17:31.056944 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2p9rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-xzhzt_openstack-operators(967443dd-77e8-4090-a90e-c7e5f2152acb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:31 crc kubenswrapper[4853]: E1209 17:17:31.592748 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 09 17:17:31 crc kubenswrapper[4853]: E1209 17:17:31.592949 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjxsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-9bq2r_openstack-operators(eff79899-1ea3-418a-86fc-f988303b6da5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.201451 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.227409 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4ed8e4a-54de-45d2-962c-7fdbfd49b302-cert\") pod \"infra-operator-controller-manager-78d48bff9d-lf994\" (UID: \"a4ed8e4a-54de-45d2-962c-7fdbfd49b302\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:34 crc kubenswrapper[4853]: E1209 17:17:34.431122 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 09 17:17:34 crc kubenswrapper[4853]: E1209 17:17:34.431448 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bswg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-qvpfh_openstack-operators(27577524-15bd-403c-9a4f-a693e212b9d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.442288 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-tr5hj" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.451211 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.507959 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.516459 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d2aeb9ff-da65-4fc1-8362-29c263f9f4c3-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fcmbqh\" (UID: \"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.528014 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-29rt5" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.540731 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.812420 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.812470 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.817226 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-metrics-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:34 crc kubenswrapper[4853]: I1209 17:17:34.827476 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f395880e-faf4-4550-aac9-9cef954c967a-webhook-certs\") pod \"openstack-operator-controller-manager-866b78c4d6-vrq2f\" (UID: \"f395880e-faf4-4550-aac9-9cef954c967a\") " pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:35 crc kubenswrapper[4853]: E1209 17:17:35.027092 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 09 17:17:35 crc kubenswrapper[4853]: E1209 17:17:35.027316 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kp6dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-nfkqh_openstack-operators(0a7fce2f-b4b3-4f6c-b417-aa159e161722): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:35 crc kubenswrapper[4853]: I1209 17:17:35.053306 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zk4p7" Dec 09 17:17:35 crc kubenswrapper[4853]: I1209 17:17:35.062734 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:37 crc kubenswrapper[4853]: E1209 17:17:37.421519 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad" Dec 09 17:17:37 crc kubenswrapper[4853]: E1209 17:17:37.422040 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5bm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-6c956_openstack-operators(da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:38 crc kubenswrapper[4853]: E1209 17:17:38.066235 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8" Dec 09 17:17:38 crc kubenswrapper[4853]: E1209 17:17:38.066941 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rffbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-qxpbb_openstack-operators(8796eb82-e5f1-4ee0-90de-ee42e6010e0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:38 crc kubenswrapper[4853]: E1209 17:17:38.181800 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1" Dec 09 17:17:38 crc kubenswrapper[4853]: E1209 17:17:38.181866 4853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.23:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1" Dec 09 17:17:38 crc kubenswrapper[4853]: E1209 17:17:38.182019 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.23:5001/openstack-k8s-operators/telemetry-operator:c4794e7165126ca78a1af546bb4ba50c90b5c4e1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rsnkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-796785f986-g89lr_openstack-operators(234b75e2-2793-4ec3-ab45-c3603ae69436): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:38 crc kubenswrapper[4853]: E1209 17:17:38.716307 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 09 17:17:38 crc kubenswrapper[4853]: E1209 17:17:38.716859 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkc5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-cgkl7_openstack-operators(e9f17b0a-4f03-460b-b0e9-743882aa435e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:39 crc kubenswrapper[4853]: E1209 17:17:39.155096 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 09 17:17:39 crc kubenswrapper[4853]: E1209 17:17:39.155282 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65kzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sv987_openstack-operators(6869667f-ac77-482d-b8c1-7ee9d7525c59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 09 17:17:39 crc kubenswrapper[4853]: E1209 17:17:39.156445 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" podUID="6869667f-ac77-482d-b8c1-7ee9d7525c59" Dec 09 17:17:39 crc kubenswrapper[4853]: E1209 17:17:39.768004 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" podUID="6869667f-ac77-482d-b8c1-7ee9d7525c59" Dec 09 17:17:42 crc kubenswrapper[4853]: E1209 17:17:42.171989 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 09 17:17:42 crc kubenswrapper[4853]: E1209 17:17:42.172400 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wd996,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-kgtxf_openstack-operators(8c543f97-25ca-48a7-8b42-120884dee80b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:17:42 crc kubenswrapper[4853]: I1209 17:17:42.782586 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh"] Dec 09 17:17:43 crc kubenswrapper[4853]: I1209 17:17:43.099902 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994"] Dec 09 17:17:43 crc kubenswrapper[4853]: I1209 17:17:43.107925 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f"] Dec 09 17:17:44 crc kubenswrapper[4853]: E1209 17:17:44.594307 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading blob sha256:960043b8858c3c30f1d79dcc49adb2804fd35c2510729e67685b298b2ca746b7: fetching blob: received unexpected HTTP status: 502 Bad Gateway" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 17:17:44 crc kubenswrapper[4853]: E1209 17:17:44.594768 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crbtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
neutron-operator-controller-manager-5fdfd5b6b5-jq42w_openstack-operators(317f5d16-66f6-42fb-b6b7-01ad51915f20): ErrImagePull: reading blob sha256:960043b8858c3c30f1d79dcc49adb2804fd35c2510729e67685b298b2ca746b7: fetching blob: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Dec 09 17:17:44 crc kubenswrapper[4853]: E1209 17:17:44.596022 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"reading blob sha256:960043b8858c3c30f1d79dcc49adb2804fd35c2510729e67685b298b2ca746b7: fetching blob: received unexpected HTTP status: 502 Bad Gateway\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" podUID="317f5d16-66f6-42fb-b6b7-01ad51915f20" Dec 09 17:17:44 crc kubenswrapper[4853]: I1209 17:17:44.809630 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" event={"ID":"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3","Type":"ContainerStarted","Data":"862b408a8adb85e1fe2b971cfccf3a23ebc0c7180cfb90751e945d257bc946cc"} Dec 09 17:17:44 crc kubenswrapper[4853]: I1209 17:17:44.811402 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" event={"ID":"a4ed8e4a-54de-45d2-962c-7fdbfd49b302","Type":"ContainerStarted","Data":"361265ace0e36cdb3362990ccc85cb6cb55c16ad551e1a3e09404caefd14c90b"} Dec 09 17:17:44 crc kubenswrapper[4853]: I1209 17:17:44.813197 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" event={"ID":"565b5b04-34ef-414f-8316-0b6ea0f7835e","Type":"ContainerStarted","Data":"334c21ca2e0c3daae2c968f727722fbc15fb0543f58619628a1bbbda69413d11"} Dec 09 17:17:44 crc kubenswrapper[4853]: I1209 17:17:44.814501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" event={"ID":"f395880e-faf4-4550-aac9-9cef954c967a","Type":"ContainerStarted","Data":"c8e7c15d77c18fae7085eee14cb1572a0b478e99a22b25f6b6a98b7056e89bfa"} Dec 09 17:17:45 crc kubenswrapper[4853]: I1209 17:17:45.827662 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" event={"ID":"f395880e-faf4-4550-aac9-9cef954c967a","Type":"ContainerStarted","Data":"378e89324faf99e3dc1aa2e37b12bfb2171c1a1824c9d180f3e73e8ff5415ef1"} Dec 09 17:17:45 crc kubenswrapper[4853]: I1209 17:17:45.827941 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" Dec 09 17:17:45 crc kubenswrapper[4853]: I1209 17:17:45.831669 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" event={"ID":"716e46e2-2382-4568-956b-eb55e54cbc92","Type":"ContainerStarted","Data":"34812b500c927b76f9ee1ad19ed12367201493d55c41d2367f98df6399e1819f"} Dec 09 17:17:45 crc kubenswrapper[4853]: I1209 17:17:45.834867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" 
event={"ID":"fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf","Type":"ContainerStarted","Data":"6e4cdae98e113294ecff8b376fe0290ba5207d9bbdef7865f4585013012cecdb"} Dec 09 17:17:45 crc kubenswrapper[4853]: I1209 17:17:45.837069 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" event={"ID":"3861e360-3725-49dd-9201-9efb6bcaf978","Type":"ContainerStarted","Data":"29629d2f50f1040ffc81e72118d0e68c7432b5e68aa9769bb47ab770f4e7032b"} Dec 09 17:17:45 crc kubenswrapper[4853]: I1209 17:17:45.839544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" event={"ID":"7f4def87-3330-40cc-863c-f6bfe07e9c2d","Type":"ContainerStarted","Data":"2c1be7452382dea5b5c88e1c606615ece0d852bca713f383503438d2ddfc9789"} Dec 09 17:17:45 crc kubenswrapper[4853]: I1209 17:17:45.866157 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f" podStartSLOduration=43.866140122 podStartE2EDuration="43.866140122s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:17:45.857535503 +0000 UTC m=+1292.792274685" watchObservedRunningTime="2025-12-09 17:17:45.866140122 +0000 UTC m=+1292.800879294" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.024837 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" podUID="85276cf5-13f2-4890-9d08-07f5e01dc90c" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.119904 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" podUID="8c543f97-25ca-48a7-8b42-120884dee80b" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.268540 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" podUID="808c6195-03f9-4f53-8cc0-8a70dc0d9588" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.280495 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" podUID="967443dd-77e8-4090-a90e-c7e5f2152acb" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.335379 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" podUID="234b75e2-2793-4ec3-ab45-c3603ae69436" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.716120 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" podUID="8796eb82-e5f1-4ee0-90de-ee42e6010e0d" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.797341 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" podUID="da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a" Dec 09 17:17:48 crc kubenswrapper[4853]: E1209 17:17:48.808713 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" podUID="0a7fce2f-b4b3-4f6c-b417-aa159e161722" Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.878304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" event={"ID":"fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf","Type":"ContainerStarted","Data":"b187768f6a904da006011d314dba351073ab538c68e2272056dbaf0dc4980e77"} Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.878891 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.880695 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" event={"ID":"7f4def87-3330-40cc-863c-f6bfe07e9c2d","Type":"ContainerStarted","Data":"0e2f2ca022841e33029872411bc4140b5f326bbaeb0df031940b3bf34fd68593"} Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.881147 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.895953 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" event={"ID":"967443dd-77e8-4090-a90e-c7e5f2152acb","Type":"ContainerStarted","Data":"10c8968fed9494b9cf15218debd769610153141e4e4fdf912c6b149b79b36092"} Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.903487 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" event={"ID":"234b75e2-2793-4ec3-ab45-c3603ae69436","Type":"ContainerStarted","Data":"f5f57a79d668379c65c86f5d66f0098a0c19f64027b309ecd66c19250f7a597f"} Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.912690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" event={"ID":"808c6195-03f9-4f53-8cc0-8a70dc0d9588","Type":"ContainerStarted","Data":"ac1b078daecc6f41237259b9058c855e8dfd325bc31851d04149e14ab0fd0bee"} Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.918988 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847" podStartSLOduration=2.928815363 podStartE2EDuration="47.918963947s" podCreationTimestamp="2025-12-09 17:17:01 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.318742524 +0000 UTC 
m=+1250.253481706" lastFinishedPulling="2025-12-09 17:17:48.308891108 +0000 UTC m=+1295.243630290" observedRunningTime="2025-12-09 17:17:48.899696761 +0000 UTC m=+1295.834435943" watchObservedRunningTime="2025-12-09 17:17:48.918963947 +0000 UTC m=+1295.853703149"
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.938258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" event={"ID":"317f5d16-66f6-42fb-b6b7-01ad51915f20","Type":"ContainerStarted","Data":"8bced5ff8e504f8c604ea753196c4d592edd4cbe36cbcac756eea9f9c2264df2"}
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.938306 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" event={"ID":"317f5d16-66f6-42fb-b6b7-01ad51915f20","Type":"ContainerStarted","Data":"b3d51cbeb5fc1f2a4f534d57cd1089e2d5125cf373b3882c02e2147aa2a13b0c"}
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.939110 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w"
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.962056 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" event={"ID":"8796eb82-e5f1-4ee0-90de-ee42e6010e0d","Type":"ContainerStarted","Data":"03fc8b017ded51fd1964d2d5218e8b2e51d3a319a33cdc09bed65e8ca4cab8a0"}
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.964016 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt" podStartSLOduration=3.824669016 podStartE2EDuration="46.963997419s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.658005275 +0000 UTC m=+1251.592744457" lastFinishedPulling="2025-12-09 17:17:47.797333678 +0000 UTC m=+1294.732072860" observedRunningTime="2025-12-09 17:17:48.94999908 +0000 UTC m=+1295.884738262" watchObservedRunningTime="2025-12-09 17:17:48.963997419 +0000 UTC m=+1295.898736611"
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.970359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" event={"ID":"da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a","Type":"ContainerStarted","Data":"d226278e337cad4051b7b56d63ced9d4179c855a2698b25aa67992d4a0179412"}
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.986330 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" event={"ID":"565b5b04-34ef-414f-8316-0b6ea0f7835e","Type":"ContainerStarted","Data":"ef06fdfce14b9143eff45d6fec524e8c3a5949bb4d5f22766c80e8f8246e1008"}
Dec 09 17:17:48 crc kubenswrapper[4853]: I1209 17:17:48.987009 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s"
Dec 09 17:17:49 crc kubenswrapper[4853]: I1209 17:17:49.008919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" event={"ID":"0a7fce2f-b4b3-4f6c-b417-aa159e161722","Type":"ContainerStarted","Data":"6553f534d6ce397ee863f0f1189c92d1fc4840ed1ec3593e4eaa522f32e81239"}
Dec 09 17:17:49 crc kubenswrapper[4853]: I1209 17:17:49.041168 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" event={"ID":"8c543f97-25ca-48a7-8b42-120884dee80b","Type":"ContainerStarted","Data":"d67e4aac6e4115d4d53ce028bca89c3a08c4e7d04bf9f5c0cd20e12b687a3b02"}
Dec 09 17:17:49 crc kubenswrapper[4853]: E1209 17:17:49.042527 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" podUID="8c543f97-25ca-48a7-8b42-120884dee80b"
Dec 09 17:17:49 crc kubenswrapper[4853]: I1209 17:17:49.044966 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" event={"ID":"85276cf5-13f2-4890-9d08-07f5e01dc90c","Type":"ContainerStarted","Data":"034ea47b37f584283c3339753fa4f4c2f1fc5db9f26481f5a2b32b0697577e31"}
Dec 09 17:17:49 crc kubenswrapper[4853]: I1209 17:17:49.052548 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w" podStartSLOduration=3.238592723 podStartE2EDuration="47.052531131s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.946844154 +0000 UTC m=+1250.881583336" lastFinishedPulling="2025-12-09 17:17:47.760782562 +0000 UTC m=+1294.695521744" observedRunningTime="2025-12-09 17:17:49.034220693 +0000 UTC m=+1295.968959885" watchObservedRunningTime="2025-12-09 17:17:49.052531131 +0000 UTC m=+1295.987270313"
Dec 09 17:17:49 crc kubenswrapper[4853]: I1209 17:17:49.111833 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s" podStartSLOduration=3.184360671 podStartE2EDuration="48.111818051s" podCreationTimestamp="2025-12-09 17:17:01 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.379939526 +0000 UTC m=+1250.314678708" lastFinishedPulling="2025-12-09 17:17:48.307396896 +0000 UTC m=+1295.242136088" observedRunningTime="2025-12-09 17:17:49.108142128 +0000 UTC m=+1296.042881310" watchObservedRunningTime="2025-12-09 17:17:49.111818051 +0000 UTC m=+1296.046557233"
Dec 09 17:17:49 crc kubenswrapper[4853]: E1209 17:17:49.835883 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" podUID="4f498f4a-152d-4c28-85b6-71fdeb32d148"
Dec 09 17:17:49 crc kubenswrapper[4853]: E1209 17:17:49.844390 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" podUID="e9f17b0a-4f03-460b-b0e9-743882aa435e"
Dec 09 17:17:50 crc kubenswrapper[4853]: I1209 17:17:50.090457 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" event={"ID":"e9f17b0a-4f03-460b-b0e9-743882aa435e","Type":"ContainerStarted","Data":"8f342558f1fa0f18289367b587594051dde738dbb24e06399d0f94a71b267041"}
Dec 09 17:17:50 crc kubenswrapper[4853]: I1209 17:17:50.095125 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" event={"ID":"4f498f4a-152d-4c28-85b6-71fdeb32d148","Type":"ContainerStarted","Data":"8615daad937e3e346ee0f33f12e25ad100f84b4120cc9336aeb3ba755b828887"}
Dec 09 17:17:50 crc kubenswrapper[4853]: I1209 17:17:50.098787 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cxf5s"
Dec 09 17:17:50 crc kubenswrapper[4853]: I1209 17:17:50.099434 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-6j847"
Dec 09 17:17:50 crc kubenswrapper[4853]: E1209 17:17:50.105904 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" podUID="e9f17b0a-4f03-460b-b0e9-743882aa435e"
Dec 09 17:17:50 crc kubenswrapper[4853]: E1209 17:17:50.106467 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" podUID="8c543f97-25ca-48a7-8b42-120884dee80b"
Dec 09 17:17:50 crc kubenswrapper[4853]: I1209 17:17:50.106564 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-77blt"
Dec 09 17:17:50 crc kubenswrapper[4853]: E1209 17:17:50.303060 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" podUID="eff79899-1ea3-418a-86fc-f988303b6da5"
Dec 09 17:17:50 crc kubenswrapper[4853]: E1209 17:17:50.488033 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" podUID="29813971-d50f-4186-88c8-380d54284514"
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.105797 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" event={"ID":"29813971-d50f-4186-88c8-380d54284514","Type":"ContainerStarted","Data":"d637db353bb0af7e6e75b6f836fcc2342f734ffc4158a48d75279018bc78e7b8"}
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.108082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" event={"ID":"808c6195-03f9-4f53-8cc0-8a70dc0d9588","Type":"ContainerStarted","Data":"6bc1cfa6b9fd91654392eb4e5ea480c04b464130b4608c17dbf1f7179d22ae26"}
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.108246 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj"
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.111335 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" event={"ID":"0a7fce2f-b4b3-4f6c-b417-aa159e161722","Type":"ContainerStarted","Data":"4f7bb082ae787f20288cf410078567ed489f9546fe7a7f5b3a95bee9dc638f7b"}
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.112002 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh"
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.114834 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" event={"ID":"eff79899-1ea3-418a-86fc-f988303b6da5","Type":"ContainerStarted","Data":"46e2470d825c6a3d21637597a46eee9b335bed27c94df1d5ef4c3a53b53b5f55"}
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.144036 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh" podStartSLOduration=3.889184141 podStartE2EDuration="49.144021517s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.707448201 +0000 UTC m=+1251.642187373" lastFinishedPulling="2025-12-09 17:17:49.962285567 +0000 UTC m=+1296.897024749" observedRunningTime="2025-12-09 17:17:51.138252976 +0000 UTC m=+1298.072992168" watchObservedRunningTime="2025-12-09 17:17:51.144021517 +0000 UTC m=+1298.078760699"
Dec 09 17:17:51 crc kubenswrapper[4853]: I1209 17:17:51.178410 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj" podStartSLOduration=3.350680792 podStartE2EDuration="49.178390842s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.139587876 +0000 UTC m=+1251.074327058" lastFinishedPulling="2025-12-09 17:17:49.967297936 +0000 UTC m=+1296.902037108" observedRunningTime="2025-12-09 17:17:51.167002276 +0000 UTC m=+1298.101741458" watchObservedRunningTime="2025-12-09 17:17:51.178390842 +0000 UTC m=+1298.113130024"
Dec 09 17:17:52 crc kubenswrapper[4853]: E1209 17:17:52.011805 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" podUID="27577524-15bd-403c-9a4f-a693e212b9d3"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.177930 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" event={"ID":"27577524-15bd-403c-9a4f-a693e212b9d3","Type":"ContainerStarted","Data":"9ba25293dd6b012bae6697fa7dd01d3be3a79286cc81373de45634e7e7eb238e"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.192946 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" event={"ID":"234b75e2-2793-4ec3-ab45-c3603ae69436","Type":"ContainerStarted","Data":"d80942b07821443eb05e2be48d9a6d59c58c3f7918b59bddfdb018477cb477b0"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.193714 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.211835 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" event={"ID":"85276cf5-13f2-4890-9d08-07f5e01dc90c","Type":"ContainerStarted","Data":"2bc36988b39ea96b56c5b7dce4f5790a62ad53e0324f1405a2e8f64ee8d452ef"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.212613 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.223862 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" event={"ID":"716e46e2-2382-4568-956b-eb55e54cbc92","Type":"ContainerStarted","Data":"f0084f07c351d0c4f5fe79fe5544c45128700ecd44b7968bfcde1dd44066d56d"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.224952 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.225721 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr" podStartSLOduration=4.922377729 podStartE2EDuration="50.225711994s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.661330568 +0000 UTC m=+1251.596069750" lastFinishedPulling="2025-12-09 17:17:49.964664833 +0000 UTC m=+1296.899404015" observedRunningTime="2025-12-09 17:17:52.225190169 +0000 UTC m=+1299.159929351" watchObservedRunningTime="2025-12-09 17:17:52.225711994 +0000 UTC m=+1299.160451176"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.252898 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.275228 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn" podStartSLOduration=5.001275354 podStartE2EDuration="50.275209871s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.691168488 +0000 UTC m=+1251.625907670" lastFinishedPulling="2025-12-09 17:17:49.965102995 +0000 UTC m=+1296.899842187" observedRunningTime="2025-12-09 17:17:52.259663008 +0000 UTC m=+1299.194402190" watchObservedRunningTime="2025-12-09 17:17:52.275209871 +0000 UTC m=+1299.209949053"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.293386 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-f5fm2" podStartSLOduration=4.063499397 podStartE2EDuration="50.293366485s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.734712053 +0000 UTC m=+1250.669451225" lastFinishedPulling="2025-12-09 17:17:49.964579131 +0000 UTC m=+1296.899318313" observedRunningTime="2025-12-09 17:17:52.283149092 +0000 UTC m=+1299.217888274" watchObservedRunningTime="2025-12-09 17:17:52.293366485 +0000 UTC m=+1299.228105667"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.312059 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" event={"ID":"8796eb82-e5f1-4ee0-90de-ee42e6010e0d","Type":"ContainerStarted","Data":"b200cfbef35d1ccda1710c14ac92b4c44b7d755efc4646c7f16cec5f8a85fd43"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.313158 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.369641 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb" podStartSLOduration=5.238132321 podStartE2EDuration="50.369623396s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.831013848 +0000 UTC m=+1251.765753030" lastFinishedPulling="2025-12-09 17:17:49.962504923 +0000 UTC m=+1296.897244105" observedRunningTime="2025-12-09 17:17:52.359016852 +0000 UTC m=+1299.293756034" watchObservedRunningTime="2025-12-09 17:17:52.369623396 +0000 UTC m=+1299.304362568"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.387129 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" event={"ID":"da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a","Type":"ContainerStarted","Data":"00a8d10125a9eb639d1b0ac56d5c89f90c02424e3be8c08df43a0a43390f0f61"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.388183 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.439146 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" event={"ID":"a4ed8e4a-54de-45d2-962c-7fdbfd49b302","Type":"ContainerStarted","Data":"835d0f214602eb1deac1f4635617f74322e6df8faaab338ad431be26fefdd679"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.472120 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956" podStartSLOduration=5.185773285 podStartE2EDuration="50.472084907s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.678735211 +0000 UTC m=+1251.613474393" lastFinishedPulling="2025-12-09 17:17:49.965046833 +0000 UTC m=+1296.899786015" observedRunningTime="2025-12-09 17:17:52.466177932 +0000 UTC m=+1299.400917114" watchObservedRunningTime="2025-12-09 17:17:52.472084907 +0000 UTC m=+1299.406824079"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.511162 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" event={"ID":"3861e360-3725-49dd-9201-9efb6bcaf978","Type":"ContainerStarted","Data":"ca1056128f0f8e7ba6ec904cb9d13dea8f62291679be8987bc18b05606af9b1f"}
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.512806 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.520855 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl"
Dec 09 17:17:52 crc kubenswrapper[4853]: I1209 17:17:52.567938 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-h4nnl" podStartSLOduration=4.390510154 podStartE2EDuration="50.567918592s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.7838332 +0000 UTC m=+1250.718572372" lastFinishedPulling="2025-12-09 17:17:49.961241628 +0000 UTC m=+1296.895980810" observedRunningTime="2025-12-09 17:17:52.562250954 +0000 UTC m=+1299.496990136" watchObservedRunningTime="2025-12-09 17:17:52.567918592 +0000 UTC m=+1299.502657774"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.530865 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" event={"ID":"29813971-d50f-4186-88c8-380d54284514","Type":"ContainerStarted","Data":"2f5c1cee65d43db36d08079f8bb0ebc06002a8b885350ceecb7ef76185f6aa80"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.531394 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.552975 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" event={"ID":"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3","Type":"ContainerStarted","Data":"a2ca0b6939a8e6a2871b12f10f087a0d957a99a6fdabe3e7aebf8167982d4eef"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.553029 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" event={"ID":"d2aeb9ff-da65-4fc1-8362-29c263f9f4c3","Type":"ContainerStarted","Data":"d99d67018486ec8c9cad37ddbba2be91326255941d26b66bb88f2118bbc5181b"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.553088 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.555467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" event={"ID":"a4ed8e4a-54de-45d2-962c-7fdbfd49b302","Type":"ContainerStarted","Data":"28c032d1d38116691b0ab1cb72c586bc17610f6d57750637fd2852560a678355"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.555979 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.557320 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" event={"ID":"967443dd-77e8-4090-a90e-c7e5f2152acb","Type":"ContainerStarted","Data":"822d0278f97660a629ce2c5a77add4b7955565aba97f7a0debc8632846857ca2"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.557779 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.559312 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" event={"ID":"27577524-15bd-403c-9a4f-a693e212b9d3","Type":"ContainerStarted","Data":"7945750ec258071fd37cc36d815763d3e36ec1a9f7d025f37da9259cf106acbc"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.559776 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.561071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" event={"ID":"4f498f4a-152d-4c28-85b6-71fdeb32d148","Type":"ContainerStarted","Data":"d99bf557e9f717a55dc9ca69eb0ee60f602abda3724303774c5f0a2d620b7f35"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.561180 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.561737 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf" podStartSLOduration=3.451832322 podStartE2EDuration="52.561726415s" podCreationTimestamp="2025-12-09 17:17:01 +0000 UTC" firstStartedPulling="2025-12-09 17:17:02.984625621 +0000 UTC m=+1249.919364803" lastFinishedPulling="2025-12-09 17:17:52.094519714 +0000 UTC m=+1299.029258896" observedRunningTime="2025-12-09 17:17:53.552832378 +0000 UTC m=+1300.487571550" watchObservedRunningTime="2025-12-09 17:17:53.561726415 +0000 UTC m=+1300.496465597"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.563247 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" event={"ID":"eff79899-1ea3-418a-86fc-f988303b6da5","Type":"ContainerStarted","Data":"35052acfd69c148b0636d77e974a25f5d9c32ec015d0646834b445893f666b77"}
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.588181 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh" podStartSLOduration=44.182628125 podStartE2EDuration="51.58816284s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:44.577582471 +0000 UTC m=+1291.512321653" lastFinishedPulling="2025-12-09 17:17:51.983117186 +0000 UTC m=+1298.917856368" observedRunningTime="2025-12-09 17:17:53.578715367 +0000 UTC m=+1300.513454569" watchObservedRunningTime="2025-12-09 17:17:53.58816284 +0000 UTC m=+1300.522902022"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.613205 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r" podStartSLOduration=4.335003321 podStartE2EDuration="51.613188976s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.7016729 +0000 UTC m=+1251.636412082" lastFinishedPulling="2025-12-09 17:17:51.979858555 +0000 UTC m=+1298.914597737" observedRunningTime="2025-12-09 17:17:53.611224172 +0000 UTC m=+1300.545963374" watchObservedRunningTime="2025-12-09 17:17:53.613188976 +0000 UTC m=+1300.547928158"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.637728 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994" podStartSLOduration=46.274550843 podStartE2EDuration="51.637711528s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:44.601779725 +0000 UTC m=+1291.536518907" lastFinishedPulling="2025-12-09 17:17:49.96494041 +0000 UTC m=+1296.899679592" observedRunningTime="2025-12-09 17:17:53.634652543 +0000 UTC m=+1300.569391735" watchObservedRunningTime="2025-12-09 17:17:53.637711528 +0000 UTC m=+1300.572450710"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.654444 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h" podStartSLOduration=3.394567022 podStartE2EDuration="51.654422473s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.942959066 +0000 UTC m=+1250.877698258" lastFinishedPulling="2025-12-09 17:17:52.202814527 +0000 UTC m=+1299.137553709" observedRunningTime="2025-12-09 17:17:53.653828857 +0000 UTC m=+1300.588568059" watchObservedRunningTime="2025-12-09 17:17:53.654422473 +0000 UTC m=+1300.589161655"
Dec 09 17:17:53 crc kubenswrapper[4853]: I1209 17:17:53.683467 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt" podStartSLOduration=3.727078096 podStartE2EDuration="52.68344592s" podCreationTimestamp="2025-12-09 17:17:01 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.025086396 +0000 UTC m=+1249.959825578" lastFinishedPulling="2025-12-09 17:17:51.98145422 +0000 UTC m=+1298.916193402" observedRunningTime="2025-12-09 17:17:53.67661178 +0000 UTC m=+1300.611350962" watchObservedRunningTime="2025-12-09 17:17:53.68344592 +0000 UTC m=+1300.618185102"
Dec 09 17:17:54 crc kubenswrapper[4853]: I1209 17:17:54.573141 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r"
Dec 09 17:17:54 crc kubenswrapper[4853]: I1209 17:17:54.600646 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh" podStartSLOduration=3.442656501 podStartE2EDuration="52.600619162s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:03.791187105 +0000 UTC m=+1250.725926297" lastFinishedPulling="2025-12-09 17:17:52.949149776 +0000 UTC m=+1299.883888958" observedRunningTime="2025-12-09 17:17:53.695433624 +0000 UTC m=+1300.630172826" watchObservedRunningTime="2025-12-09 17:17:54.600619162 +0000 UTC m=+1301.535358364"
Dec 09 17:17:55 crc kubenswrapper[4853]: I1209 17:17:55.071835 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-866b78c4d6-vrq2f"
Dec 09 17:17:56 crc kubenswrapper[4853]: I1209 17:17:56.595508 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" event={"ID":"6869667f-ac77-482d-b8c1-7ee9d7525c59","Type":"ContainerStarted","Data":"c206f359b27bcaeef038d7d16267b5e9fa26b170f4dc060fea16750300220b77"}
Dec 09 17:17:56 crc kubenswrapper[4853]: I1209 17:17:56.629585 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv987" podStartSLOduration=3.808760843 podStartE2EDuration="54.629561287s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.701256948 +0000 UTC m=+1251.635996130" lastFinishedPulling="2025-12-09 17:17:55.522057392 +0000 UTC m=+1302.456796574" observedRunningTime="2025-12-09 17:17:56.623491168 +0000 UTC m=+1303.558230400" watchObservedRunningTime="2025-12-09 17:17:56.629561287 +0000 UTC m=+1303.564300479"
Dec 09 17:18:02 crc kubenswrapper[4853]: I1209 17:18:02.270716 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-bj2bf"
Dec 09 17:18:02 crc kubenswrapper[4853]: I1209 17:18:02.281668 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-xzhzt"
Dec 09 17:18:02 crc kubenswrapper[4853]: I1209 17:18:02.551250 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qvpfh"
Dec 09 17:18:02 crc kubenswrapper[4853]: I1209 17:18:02.721426 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-vhm5h"
Dec 09 17:18:02 crc kubenswrapper[4853]: I1209 17:18:02.752673 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-qtnbj"
Dec 09 17:18:02 crc kubenswrapper[4853]: I1209 17:18:02.787901 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-6c956"
Dec 09 17:18:02 crc kubenswrapper[4853]: I1209 17:18:02.808581 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jq42w"
Dec 09 17:18:03 crc kubenswrapper[4853]: I1209 17:18:03.057801 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nfkqh"
Dec 09 17:18:03 crc kubenswrapper[4853]: I1209 17:18:03.137836 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-796785f986-g89lr"
Dec 09 17:18:03 crc kubenswrapper[4853]: I1209 17:18:03.139096 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-9bq2r"
Dec 09 17:18:03 crc kubenswrapper[4853]: I1209 17:18:03.140905 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-s7rrn"
Dec 09 17:18:03 crc kubenswrapper[4853]: I1209 17:18:03.197995 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-qxpbb"
Dec 09 17:18:04 crc kubenswrapper[4853]: I1209 17:18:04.458465 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-lf994"
Dec 09 17:18:04 crc kubenswrapper[4853]: I1209 17:18:04.546518 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fcmbqh"
Dec 09 17:18:05 crc kubenswrapper[4853]: I1209 17:18:05.672110 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" event={"ID":"8c543f97-25ca-48a7-8b42-120884dee80b","Type":"ContainerStarted","Data":"2d2f04dae7169fe648cf18ce66fa5766600994c37de7ceeb4aceed2c6e586d0d"}
Dec 09 17:18:05 crc kubenswrapper[4853]: I1209 17:18:05.672593 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf"
Dec 09 17:18:05 crc kubenswrapper[4853]: I1209 17:18:05.696476 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf" podStartSLOduration=3.616504726 podStartE2EDuration="1m3.696448623s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.696099455 +0000 UTC m=+1251.630838637" lastFinishedPulling="2025-12-09 17:18:04.776043352 +0000 UTC m=+1311.710782534" observedRunningTime="2025-12-09 17:18:05.688284096 +0000 UTC m=+1312.623023278" watchObservedRunningTime="2025-12-09 17:18:05.696448623 +0000 UTC m=+1312.631187805"
Dec 09 17:18:06 crc kubenswrapper[4853]: I1209 17:18:06.681340 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" event={"ID":"e9f17b0a-4f03-460b-b0e9-743882aa435e","Type":"ContainerStarted","Data":"f9a9702fe3c9ea8d16cfceeef2f2c198494d074532363ab6d104f4058b39293f"}
Dec 09 17:18:06 crc kubenswrapper[4853]: I1209 17:18:06.681835 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7"
Dec 09 17:18:06 crc kubenswrapper[4853]: I1209 17:18:06.703845 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7" podStartSLOduration=3.396064884 podStartE2EDuration="1m4.703821373s" podCreationTimestamp="2025-12-09 17:17:02 +0000 UTC" firstStartedPulling="2025-12-09 17:17:04.736084387 +0000 UTC m=+1251.670823569" lastFinishedPulling="2025-12-09 17:18:06.043840876 +0000 UTC m=+1312.978580058" observedRunningTime="2025-12-09 17:18:06.700131921 +0000 UTC m=+1313.634871103" watchObservedRunningTime="2025-12-09 17:18:06.703821373 +0000 UTC m=+1313.638560575"
Dec 09 17:18:12 crc kubenswrapper[4853]: I1209 17:18:12.890212 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-kgtxf"
Dec 09 17:18:12 crc kubenswrapper[4853]: I1209 17:18:12.964000 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-cgkl7"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.779488 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c98v5"]
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.782052 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.785221 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.785305 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.785440 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-n4q26"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.785720 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.787626 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c98v5"]
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.842310 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7s7qf"]
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.846933 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.849693 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.860801 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7s7qf"]
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.864003 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fadee3-bd2f-4f4e-af68-b938ecbbf327-config\") pod \"dnsmasq-dns-675f4bcbfc-c98v5\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.864053 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.864080 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9xb\" (UniqueName: \"kubernetes.io/projected/16fadee3-bd2f-4f4e-af68-b938ecbbf327-kube-api-access-gb9xb\") pod \"dnsmasq-dns-675f4bcbfc-c98v5\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.864124 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-config\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.864274 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n74pq\" (UniqueName: \"kubernetes.io/projected/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-kube-api-access-n74pq\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.965434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fadee3-bd2f-4f4e-af68-b938ecbbf327-config\") pod \"dnsmasq-dns-675f4bcbfc-c98v5\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.965482 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.965512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9xb\" (UniqueName: \"kubernetes.io/projected/16fadee3-bd2f-4f4e-af68-b938ecbbf327-kube-api-access-gb9xb\") pod \"dnsmasq-dns-675f4bcbfc-c98v5\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.965561 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-config\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.965748 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n74pq\" (UniqueName: \"kubernetes.io/projected/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-kube-api-access-n74pq\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.967121 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.967132 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fadee3-bd2f-4f4e-af68-b938ecbbf327-config\") pod \"dnsmasq-dns-675f4bcbfc-c98v5\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.967852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-config\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.988455 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n74pq\" (UniqueName: \"kubernetes.io/projected/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-kube-api-access-n74pq\") pod \"dnsmasq-dns-78dd6ddcc-7s7qf\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:36 crc kubenswrapper[4853]: I1209 17:18:36.991557 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9xb\" (UniqueName: \"kubernetes.io/projected/16fadee3-bd2f-4f4e-af68-b938ecbbf327-kube-api-access-gb9xb\") pod \"dnsmasq-dns-675f4bcbfc-c98v5\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:37 crc kubenswrapper[4853]: I1209 17:18:37.104663 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5"
Dec 09 17:18:37 crc kubenswrapper[4853]: I1209 17:18:37.182735 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf"
Dec 09 17:18:37 crc kubenswrapper[4853]: I1209 17:18:37.621276 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c98v5"]
Dec 09 17:18:37 crc kubenswrapper[4853]: W1209 17:18:37.691460 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdd077b2_a7e5_4421_baed_08f9ba7b17c8.slice/crio-30be8153f3cc23653c65ab6230d1c2db901b067b899da34fb173b0b7567f3aed WatchSource:0}: Error finding container 30be8153f3cc23653c65ab6230d1c2db901b067b899da34fb173b0b7567f3aed: Status 404 returned error can't find the container with id 30be8153f3cc23653c65ab6230d1c2db901b067b899da34fb173b0b7567f3aed
Dec 09 17:18:37 crc kubenswrapper[4853]: I1209 17:18:37.691527 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7s7qf"]
Dec 09 17:18:37 crc kubenswrapper[4853]: I1209 17:18:37.960657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf" event={"ID":"cdd077b2-a7e5-4421-baed-08f9ba7b17c8","Type":"ContainerStarted","Data":"30be8153f3cc23653c65ab6230d1c2db901b067b899da34fb173b0b7567f3aed"}
Dec 09 17:18:37 crc kubenswrapper[4853]: I1209 17:18:37.962288 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5" event={"ID":"16fadee3-bd2f-4f4e-af68-b938ecbbf327","Type":"ContainerStarted","Data":"7df3f75ac4e1759f6a59bcb781809b97fd54190746a1b8975e51a6923843db31"}
Dec 09 17:18:39 crc kubenswrapper[4853]: I1209 17:18:39.960014 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c98v5"]
Dec 09 17:18:39 crc kubenswrapper[4853]: I1209 17:18:39.981739 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-v7pf2"]
Dec 09 17:18:39 crc kubenswrapper[4853]: I1209 17:18:39.985818 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:39.998398 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-v7pf2"]
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.148669 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-config\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.148757 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpst\" (UniqueName: \"kubernetes.io/projected/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-kube-api-access-xlpst\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.148799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.250484 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-config\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.250583 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpst\" (UniqueName: \"kubernetes.io/projected/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-kube-api-access-xlpst\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.250637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.251755 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.252419 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-config\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.284399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpst\" (UniqueName: \"kubernetes.io/projected/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-kube-api-access-xlpst\") pod \"dnsmasq-dns-666b6646f7-v7pf2\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") " pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.296935 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7s7qf"]
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.323331 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.360199 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cb677"]
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.373040 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.383027 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cb677"]
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.476965 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.479170 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrh4w\" (UniqueName: \"kubernetes.io/projected/96c29b56-c7bd-46aa-b130-e34425353476-kube-api-access-zrh4w\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.479399 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-config\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.580940 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-config\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.581021 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.581072 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrh4w\" (UniqueName: \"kubernetes.io/projected/96c29b56-c7bd-46aa-b130-e34425353476-kube-api-access-zrh4w\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.582343 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-config\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.583073 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.600395 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrh4w\" (UniqueName: \"kubernetes.io/projected/96c29b56-c7bd-46aa-b130-e34425353476-kube-api-access-zrh4w\") pod \"dnsmasq-dns-57d769cc4f-cb677\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:40 crc kubenswrapper[4853]: I1209 17:18:40.792141 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.161269 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.164047 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.168585 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.168647 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.168768 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.168871 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.168922 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.168883 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.171848 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5m645"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.174007 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.186387 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-v7pf2"]
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201174 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96900f2e-a2ad-47fe-be9b-7b6a924ded82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwwtk\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-kube-api-access-fwwtk\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201266 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-config-data\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201318 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201344 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201359 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201389 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201410 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96900f2e-a2ad-47fe-be9b-7b6a924ded82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201443 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.201462 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303317 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303363 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96900f2e-a2ad-47fe-be9b-7b6a924ded82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303387 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwwtk\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-kube-api-access-fwwtk\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303430 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303448 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-config-data\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303506 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.303520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.304050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0"
Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.304081 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96900f2e-a2ad-47fe-be9b-7b6a924ded82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.304115 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.305706 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.306148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.306500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-config-data\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.308933 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.311731 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.311927 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.314918 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96900f2e-a2ad-47fe-be9b-7b6a924ded82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.315683 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 
09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.315847 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.322365 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96900f2e-a2ad-47fe-be9b-7b6a924ded82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.328541 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwwtk\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-kube-api-access-fwwtk\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.338923 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cb677"] Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.353701 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.485693 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.488328 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.493848 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.498402 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.498426 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.498773 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.498992 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.499176 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.499315 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.500004 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h9wq5" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.504083 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.613865 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.614684 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.614810 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.614913 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.615925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.616030 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03a2cb4e-7efc-4040-a115-db55575800e5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.616168 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.616308 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.616406 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03a2cb4e-7efc-4040-a115-db55575800e5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.616536 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.616659 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qx2\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-kube-api-access-h8qx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718562 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718632 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qx2\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-kube-api-access-h8qx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718682 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718702 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718736 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718762 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc 
kubenswrapper[4853]: I1209 17:18:41.718824 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03a2cb4e-7efc-4040-a115-db55575800e5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718851 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718873 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.718898 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03a2cb4e-7efc-4040-a115-db55575800e5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.720078 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.720395 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.720935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.721179 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.721481 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.721734 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.727503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.728056 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03a2cb4e-7efc-4040-a115-db55575800e5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.729099 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03a2cb4e-7efc-4040-a115-db55575800e5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.738878 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.746459 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qx2\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-kube-api-access-h8qx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.750284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:41 crc kubenswrapper[4853]: I1209 17:18:41.821571 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.023768 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" event={"ID":"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f","Type":"ContainerStarted","Data":"e21af3bdb612fb3fc6001afc8ce5d073f45c23033a264da4eaad26c9a7b12827"} Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.732232 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.733879 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.737055 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.737703 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.737876 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-krq7m" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.739585 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.747160 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.751588 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.839907 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8797q\" (UniqueName: \"kubernetes.io/projected/dc8bc986-e8a8-467c-8e2a-795c26a74de7-kube-api-access-8797q\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.840661 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8bc986-e8a8-467c-8e2a-795c26a74de7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.840703 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.840814 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc8bc986-e8a8-467c-8e2a-795c26a74de7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.840840 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dc8bc986-e8a8-467c-8e2a-795c26a74de7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.841770 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.841824 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.841850 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943526 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc8bc986-e8a8-467c-8e2a-795c26a74de7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943584 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dc8bc986-e8a8-467c-8e2a-795c26a74de7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943687 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943707 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943725 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943758 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8797q\" (UniqueName: \"kubernetes.io/projected/dc8bc986-e8a8-467c-8e2a-795c26a74de7-kube-api-access-8797q\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943773 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8bc986-e8a8-467c-8e2a-795c26a74de7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943796 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: 
\"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.943988 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.945054 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.948040 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dc8bc986-e8a8-467c-8e2a-795c26a74de7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.948795 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.949761 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc8bc986-e8a8-467c-8e2a-795c26a74de7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.950314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8bc986-e8a8-467c-8e2a-795c26a74de7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.957764 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8bc986-e8a8-467c-8e2a-795c26a74de7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.969298 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8797q\" (UniqueName: \"kubernetes.io/projected/dc8bc986-e8a8-467c-8e2a-795c26a74de7-kube-api-access-8797q\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:42 crc kubenswrapper[4853]: I1209 17:18:42.985227 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"dc8bc986-e8a8-467c-8e2a-795c26a74de7\") " pod="openstack/openstack-galera-0" Dec 09 17:18:43 crc kubenswrapper[4853]: I1209 17:18:43.081459 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.032719 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.034566 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.037102 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.037255 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dq62s" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.038573 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.039270 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.044575 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167306 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbc81f-b48d-4790-a213-10daf9f83287-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167355 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85xw5\" (UniqueName: \"kubernetes.io/projected/5cfbc81f-b48d-4790-a213-10daf9f83287-kube-api-access-85xw5\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167411 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbc81f-b48d-4790-a213-10daf9f83287-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167527 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167550 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc81f-b48d-4790-a213-10daf9f83287-config-data-generated\") 
pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167574 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.167691 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270437 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbc81f-b48d-4790-a213-10daf9f83287-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85xw5\" (UniqueName: \"kubernetes.io/projected/5cfbc81f-b48d-4790-a213-10daf9f83287-kube-api-access-85xw5\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270588 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbc81f-b48d-4790-a213-10daf9f83287-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270762 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270795 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc81f-b48d-4790-a213-10daf9f83287-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270823 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.270855 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.271995 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.272875 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.274064 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc81f-b48d-4790-a213-10daf9f83287-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.276654 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.279336 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbc81f-b48d-4790-a213-10daf9f83287-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.282126 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cfbc81f-b48d-4790-a213-10daf9f83287-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.306194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85xw5\" (UniqueName: \"kubernetes.io/projected/5cfbc81f-b48d-4790-a213-10daf9f83287-kube-api-access-85xw5\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.306707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbc81f-b48d-4790-a213-10daf9f83287-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: 
I1209 17:18:44.361293 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5cfbc81f-b48d-4790-a213-10daf9f83287\") " pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.483500 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.485202 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.519633 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.556839 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.557058 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.559333 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7gwc6" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.575141 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2dpz\" (UniqueName: \"kubernetes.io/projected/8c1f0f91-fd80-4a19-9561-119c381afc9c-kube-api-access-t2dpz\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.575193 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1f0f91-fd80-4a19-9561-119c381afc9c-kolla-config\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.575220 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c1f0f91-fd80-4a19-9561-119c381afc9c-config-data\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.575279 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1f0f91-fd80-4a19-9561-119c381afc9c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.575343 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1f0f91-fd80-4a19-9561-119c381afc9c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.655204 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.676688 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c1f0f91-fd80-4a19-9561-119c381afc9c-config-data\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.676787 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1f0f91-fd80-4a19-9561-119c381afc9c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.676866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1f0f91-fd80-4a19-9561-119c381afc9c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.676916 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2dpz\" (UniqueName: \"kubernetes.io/projected/8c1f0f91-fd80-4a19-9561-119c381afc9c-kube-api-access-t2dpz\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.676939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1f0f91-fd80-4a19-9561-119c381afc9c-kolla-config\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.677660 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c1f0f91-fd80-4a19-9561-119c381afc9c-kolla-config\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.678131 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c1f0f91-fd80-4a19-9561-119c381afc9c-config-data\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.686088 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c1f0f91-fd80-4a19-9561-119c381afc9c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.696194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c1f0f91-fd80-4a19-9561-119c381afc9c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.710320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2dpz\" (UniqueName: \"kubernetes.io/projected/8c1f0f91-fd80-4a19-9561-119c381afc9c-kube-api-access-t2dpz\") pod \"memcached-0\" (UID: 
\"8c1f0f91-fd80-4a19-9561-119c381afc9c\") " pod="openstack/memcached-0" Dec 09 17:18:44 crc kubenswrapper[4853]: I1209 17:18:44.838667 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 17:18:45 crc kubenswrapper[4853]: I1209 17:18:45.065269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" event={"ID":"96c29b56-c7bd-46aa-b130-e34425353476","Type":"ContainerStarted","Data":"18a3dec7d0d5452b063e7b4f082b6e6f8effc4617ec9356b10807e6bb837d2a0"} Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.119263 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.120501 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.122806 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g8lhf" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.136729 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.214383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c942\" (UniqueName: \"kubernetes.io/projected/97a2bf6d-2b94-43ab-92e9-7a2355ae7df5-kube-api-access-6c942\") pod \"kube-state-metrics-0\" (UID: \"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5\") " pod="openstack/kube-state-metrics-0" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.315840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c942\" (UniqueName: \"kubernetes.io/projected/97a2bf6d-2b94-43ab-92e9-7a2355ae7df5-kube-api-access-6c942\") pod \"kube-state-metrics-0\" (UID: \"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5\") " pod="openstack/kube-state-metrics-0" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.347033 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c942\" (UniqueName: \"kubernetes.io/projected/97a2bf6d-2b94-43ab-92e9-7a2355ae7df5-kube-api-access-6c942\") pod \"kube-state-metrics-0\" (UID: \"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5\") " pod="openstack/kube-state-metrics-0" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.440840 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.830498 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2"] Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.832089 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.847177 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.847403 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-lh9g7" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.853317 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2"] Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.927333 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29e5a87-e074-4460-8fcf-d8b519b2c746-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nhns2\" (UID: \"d29e5a87-e074-4460-8fcf-d8b519b2c746\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:46 crc kubenswrapper[4853]: I1209 17:18:46.927735 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crvnl\" (UniqueName: \"kubernetes.io/projected/d29e5a87-e074-4460-8fcf-d8b519b2c746-kube-api-access-crvnl\") pod \"observability-ui-dashboards-7d5fb4cbfb-nhns2\" (UID: \"d29e5a87-e074-4460-8fcf-d8b519b2c746\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.029731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29e5a87-e074-4460-8fcf-d8b519b2c746-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nhns2\" (UID: \"d29e5a87-e074-4460-8fcf-d8b519b2c746\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.029782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crvnl\" (UniqueName: \"kubernetes.io/projected/d29e5a87-e074-4460-8fcf-d8b519b2c746-kube-api-access-crvnl\") pod \"observability-ui-dashboards-7d5fb4cbfb-nhns2\" (UID: \"d29e5a87-e074-4460-8fcf-d8b519b2c746\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:47 crc kubenswrapper[4853]: E1209 17:18:47.029988 4853 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Dec 09 17:18:47 crc kubenswrapper[4853]: E1209 17:18:47.030073 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d29e5a87-e074-4460-8fcf-d8b519b2c746-serving-cert podName:d29e5a87-e074-4460-8fcf-d8b519b2c746 nodeName:}" failed. No retries permitted until 2025-12-09 17:18:47.530052631 +0000 UTC m=+1354.464791813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d29e5a87-e074-4460-8fcf-d8b519b2c746-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-nhns2" (UID: "d29e5a87-e074-4460-8fcf-d8b519b2c746") : secret "observability-ui-dashboards" not found Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.055581 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crvnl\" (UniqueName: \"kubernetes.io/projected/d29e5a87-e074-4460-8fcf-d8b519b2c746-kube-api-access-crvnl\") pod \"observability-ui-dashboards-7d5fb4cbfb-nhns2\" (UID: \"d29e5a87-e074-4460-8fcf-d8b519b2c746\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.210313 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68cb544846-9ln7v"] Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.212588 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.231354 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cb544846-9ln7v"] Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.335345 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-trusted-ca-bundle\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.335425 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea81d355-d587-4a4b-a746-92c005875727-console-serving-cert\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.335471 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-service-ca\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.335523 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-console-config\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.335576 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea81d355-d587-4a4b-a746-92c005875727-console-oauth-config\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.335640 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-oauth-serving-cert\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.335797 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76qmm\" (UniqueName: \"kubernetes.io/projected/ea81d355-d587-4a4b-a746-92c005875727-kube-api-access-76qmm\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.437751 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-trusted-ca-bundle\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.438123 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea81d355-d587-4a4b-a746-92c005875727-console-serving-cert\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.438157 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-service-ca\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.438224 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-console-config\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.438300 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-oauth-serving-cert\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.438327 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea81d355-d587-4a4b-a746-92c005875727-console-oauth-config\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.438433 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76qmm\" (UniqueName: \"kubernetes.io/projected/ea81d355-d587-4a4b-a746-92c005875727-kube-api-access-76qmm\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.439523 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-trusted-ca-bundle\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.440257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-oauth-serving-cert\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.440353 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-console-config\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.440378 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea81d355-d587-4a4b-a746-92c005875727-service-ca\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.445169 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea81d355-d587-4a4b-a746-92c005875727-console-oauth-config\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.454345 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea81d355-d587-4a4b-a746-92c005875727-console-serving-cert\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.456017 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.458734 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.464359 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76qmm\" (UniqueName: \"kubernetes.io/projected/ea81d355-d587-4a4b-a746-92c005875727-kube-api-access-76qmm\") pod \"console-68cb544846-9ln7v\" (UID: \"ea81d355-d587-4a4b-a746-92c005875727\") " pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.469088 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g6cfs" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.469554 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.469688 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.470791 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.478207 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.479922 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.491479 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.535050 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68cb544846-9ln7v" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.544716 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.545142 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.554523 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29e5a87-e074-4460-8fcf-d8b519b2c746-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nhns2\" (UID: \"d29e5a87-e074-4460-8fcf-d8b519b2c746\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.554846 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39b91583-7835-4bb9-ad7f-32fae11f2b77-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.555142 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-config\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.555310 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.555532 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzbw\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-kube-api-access-2kzbw\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.556091 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39b91583-7835-4bb9-ad7f-32fae11f2b77-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.556305 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.562167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29e5a87-e074-4460-8fcf-d8b519b2c746-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nhns2\" (UID: \"d29e5a87-e074-4460-8fcf-d8b519b2c746\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.659037 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39b91583-7835-4bb9-ad7f-32fae11f2b77-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.659151 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-config\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.659233 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.660101 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kzbw\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-kube-api-access-2kzbw\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.660163 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39b91583-7835-4bb9-ad7f-32fae11f2b77-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.660260 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.660348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.660394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.661482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39b91583-7835-4bb9-ad7f-32fae11f2b77-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.661619 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.663269 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39b91583-7835-4bb9-ad7f-32fae11f2b77-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.664221 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.664514 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.667084 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.677378 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-config\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.685164 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kzbw\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-kube-api-access-2kzbw\") pod \"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.691198 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"prometheus-metric-storage-0\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.770306 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" Dec 09 17:18:47 crc kubenswrapper[4853]: I1209 17:18:47.854755 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.064959 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9tc2"] Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.066862 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.072821 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-862bh" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.073521 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.073836 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.084824 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9tc2"] Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.211931 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-twpcm"] Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.249039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9047f51-9852-47e3-bc10-649c8d638054-scripts\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.250909 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbwc8\" (UniqueName: \"kubernetes.io/projected/e9047f51-9852-47e3-bc10-649c8d638054-kube-api-access-tbwc8\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.251100 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9047f51-9852-47e3-bc10-649c8d638054-combined-ca-bundle\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.251169 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-log-ovn\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.251284 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9047f51-9852-47e3-bc10-649c8d638054-ovn-controller-tls-certs\") pod 
\"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.252072 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-run-ovn\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.252102 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-run\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.257949 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.295422 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-twpcm"] Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354286 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-scripts\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354436 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9047f51-9852-47e3-bc10-649c8d638054-scripts\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354469 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbwc8\" (UniqueName: \"kubernetes.io/projected/e9047f51-9852-47e3-bc10-649c8d638054-kube-api-access-tbwc8\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354508 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-run\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-log\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354635 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284vd\" (UniqueName: \"kubernetes.io/projected/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-kube-api-access-284vd\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc 
kubenswrapper[4853]: I1209 17:18:50.354717 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9047f51-9852-47e3-bc10-649c8d638054-combined-ca-bundle\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354761 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-log-ovn\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354791 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9047f51-9852-47e3-bc10-649c8d638054-ovn-controller-tls-certs\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-etc-ovs\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354836 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-lib\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.354865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-run-ovn\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.356361 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-run-ovn\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.356523 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-log-ovn\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.356856 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-run\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.356973 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/e9047f51-9852-47e3-bc10-649c8d638054-var-run\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.359213 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9047f51-9852-47e3-bc10-649c8d638054-scripts\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.362064 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9047f51-9852-47e3-bc10-649c8d638054-combined-ca-bundle\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.364490 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9047f51-9852-47e3-bc10-649c8d638054-ovn-controller-tls-certs\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.391924 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbwc8\" (UniqueName: \"kubernetes.io/projected/e9047f51-9852-47e3-bc10-649c8d638054-kube-api-access-tbwc8\") pod \"ovn-controller-k9tc2\" (UID: \"e9047f51-9852-47e3-bc10-649c8d638054\") " pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.455396 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9tc2" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.461405 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-run\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.461494 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-log\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.461527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-284vd\" (UniqueName: \"kubernetes.io/projected/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-kube-api-access-284vd\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.461641 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-etc-ovs\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.461665 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-lib\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.461734 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-scripts\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.462087 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-run\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.462211 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-etc-ovs\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.462315 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-lib\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.462489 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-var-log\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.464444 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-scripts\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.482015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-284vd\" (UniqueName: \"kubernetes.io/projected/b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568-kube-api-access-284vd\") pod \"ovn-controller-ovs-twpcm\" (UID: \"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568\") " pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.596983 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.940583 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.942631 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.951153 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nb9nq" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.951744 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.951922 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.952068 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.952186 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 09 17:18:50 crc kubenswrapper[4853]: I1209 17:18:50.955011 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.082925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.082970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.083002 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.083072 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.083136 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.083164 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-config\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.083197 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcl5\" (UniqueName: \"kubernetes.io/projected/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-kube-api-access-zgcl5\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.083225 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185147 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185204 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185233 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185297 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185349 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-config\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185386 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgcl5\" (UniqueName: \"kubernetes.io/projected/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-kube-api-access-zgcl5\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.185405 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.186586 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.186672 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.188451 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-config\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.188647 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.191841 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.192062 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc 
kubenswrapper[4853]: I1209 17:18:51.199849 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.205499 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgcl5\" (UniqueName: \"kubernetes.io/projected/4084ccc4-89b3-4be7-a0aa-83f3619a0cb1-kube-api-access-zgcl5\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.217266 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1\") " pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:51 crc kubenswrapper[4853]: I1209 17:18:51.273507 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.037039 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.044907 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.050370 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.051663 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.051937 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.052853 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jcbpx" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.060583 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.192833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0106126-8691-4275-82e8-a74d76c6482c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.193043 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8qg\" (UniqueName: \"kubernetes.io/projected/a0106126-8691-4275-82e8-a74d76c6482c-kube-api-access-vx8qg\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.193163 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " 
pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.193214 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.193277 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0106126-8691-4275-82e8-a74d76c6482c-config\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.193442 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.193550 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.193901 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0106126-8691-4275-82e8-a74d76c6482c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.296345 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.296440 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.296469 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0106126-8691-4275-82e8-a74d76c6482c-config\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.296621 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.296663 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.296815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0106126-8691-4275-82e8-a74d76c6482c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.296903 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0106126-8691-4275-82e8-a74d76c6482c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.297018 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.297027 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx8qg\" (UniqueName: \"kubernetes.io/projected/a0106126-8691-4275-82e8-a74d76c6482c-kube-api-access-vx8qg\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.297546 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0106126-8691-4275-82e8-a74d76c6482c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.298444 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0106126-8691-4275-82e8-a74d76c6482c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.298710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0106126-8691-4275-82e8-a74d76c6482c-config\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.303967 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.304561 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.304696 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0106126-8691-4275-82e8-a74d76c6482c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.315881 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx8qg\" (UniqueName: \"kubernetes.io/projected/a0106126-8691-4275-82e8-a74d76c6482c-kube-api-access-vx8qg\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.322284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a0106126-8691-4275-82e8-a74d76c6482c\") " pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:54 crc kubenswrapper[4853]: I1209 17:18:54.375840 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 17:18:55 crc kubenswrapper[4853]: E1209 17:18:55.788872 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 17:18:55 crc kubenswrapper[4853]: E1209 17:18:55.789470 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb9xb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-675f4bcbfc-c98v5_openstack(16fadee3-bd2f-4f4e-af68-b938ecbbf327): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:18:55 crc kubenswrapper[4853]: E1209 17:18:55.791812 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5" podUID="16fadee3-bd2f-4f4e-af68-b938ecbbf327" Dec 09 17:18:55 crc kubenswrapper[4853]: E1209 17:18:55.797726 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 17:18:55 crc kubenswrapper[4853]: E1209 17:18:55.797938 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n74pq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7s7qf_openstack(cdd077b2-a7e5-4421-baed-08f9ba7b17c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:18:55 crc kubenswrapper[4853]: E1209 17:18:55.800069 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf" 
podUID="cdd077b2-a7e5-4421-baed-08f9ba7b17c8" Dec 09 17:18:56 crc kubenswrapper[4853]: I1209 17:18:56.359585 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:18:56 crc kubenswrapper[4853]: I1209 17:18:56.840321 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 17:18:56 crc kubenswrapper[4853]: I1209 17:18:56.851041 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.223685 4853 generic.go:334] "Generic (PLEG): container finished" podID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerID="3e3ca36931c6df587306622402f9b2f757ee55ac3f63e55574d067bcd542d863" exitCode=0 Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.223781 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" event={"ID":"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f","Type":"ContainerDied","Data":"3e3ca36931c6df587306622402f9b2f757ee55ac3f63e55574d067bcd542d863"} Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.227111 4853 generic.go:334] "Generic (PLEG): container finished" podID="96c29b56-c7bd-46aa-b130-e34425353476" containerID="a05885a367d6608871d8f12c4993d38ace1ff72d68a6b6251d45f7d0989bdd8b" exitCode=0 Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.227168 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" event={"ID":"96c29b56-c7bd-46aa-b130-e34425353476","Type":"ContainerDied","Data":"a05885a367d6608871d8f12c4993d38ace1ff72d68a6b6251d45f7d0989bdd8b"} Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.230347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dc8bc986-e8a8-467c-8e2a-795c26a74de7","Type":"ContainerStarted","Data":"a046a2952126f12c3dc83213bdbbbfb63987cb611376905a4216e19f26388e96"} Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.232830 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96900f2e-a2ad-47fe-be9b-7b6a924ded82","Type":"ContainerStarted","Data":"95f63897c683b3a56d95f3c1a0a1d25e48f29708bfffe7b6e3e27750a4b23f65"} Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.253468 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03a2cb4e-7efc-4040-a115-db55575800e5","Type":"ContainerStarted","Data":"86c345071e94f19007dd74127d42f586e8093f4d1d6570dad7c1ccbd053b0124"} Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.404535 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.414651 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf" Dec 09 17:18:57 crc kubenswrapper[4853]: W1209 17:18:57.416435 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd29e5a87_e074_4460_8fcf_d8b519b2c746.slice/crio-50cb42485d8f00324b3240777a2a0745aa7ddc406424708e2b70a6ea5e1bc063 WatchSource:0}: Error finding container 50cb42485d8f00324b3240777a2a0745aa7ddc406424708e2b70a6ea5e1bc063: Status 404 returned error can't find the container with id 50cb42485d8f00324b3240777a2a0745aa7ddc406424708e2b70a6ea5e1bc063 Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.420698 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 17:18:57 crc kubenswrapper[4853]: W1209 17:18:57.448942 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c1f0f91_fd80_4a19_9561_119c381afc9c.slice/crio-6d5525850110220668968b8b94f88bb13b102d43b100aaddf4a10a106c7a9447 WatchSource:0}: Error finding container 6d5525850110220668968b8b94f88bb13b102d43b100aaddf4a10a106c7a9447: Status 404 returned error can't find the container with id 6d5525850110220668968b8b94f88bb13b102d43b100aaddf4a10a106c7a9447 Dec 09 17:18:57 crc kubenswrapper[4853]: W1209 17:18:57.450412 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cfbc81f_b48d_4790_a213_10daf9f83287.slice/crio-9ff3c95648bd87858e903a073c7c256ea36edff3692de0dafac6f9130664e40d WatchSource:0}: Error finding container 9ff3c95648bd87858e903a073c7c256ea36edff3692de0dafac6f9130664e40d: Status 404 returned error can't find the container with id 9ff3c95648bd87858e903a073c7c256ea36edff3692de0dafac6f9130664e40d Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.458827 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.482026 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2"] Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.495337 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.502891 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.582624 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fadee3-bd2f-4f4e-af68-b938ecbbf327-config\") pod \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.582758 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-config\") pod \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.582918 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-dns-svc\") pod \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " Dec 09 17:18:57 crc 
kubenswrapper[4853]: I1209 17:18:57.582983 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n74pq\" (UniqueName: \"kubernetes.io/projected/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-kube-api-access-n74pq\") pod \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\" (UID: \"cdd077b2-a7e5-4421-baed-08f9ba7b17c8\") " Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.583069 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb9xb\" (UniqueName: \"kubernetes.io/projected/16fadee3-bd2f-4f4e-af68-b938ecbbf327-kube-api-access-gb9xb\") pod \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\" (UID: \"16fadee3-bd2f-4f4e-af68-b938ecbbf327\") " Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.583237 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16fadee3-bd2f-4f4e-af68-b938ecbbf327-config" (OuterVolumeSpecName: "config") pod "16fadee3-bd2f-4f4e-af68-b938ecbbf327" (UID: "16fadee3-bd2f-4f4e-af68-b938ecbbf327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.583390 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-config" (OuterVolumeSpecName: "config") pod "cdd077b2-a7e5-4421-baed-08f9ba7b17c8" (UID: "cdd077b2-a7e5-4421-baed-08f9ba7b17c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.583514 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdd077b2-a7e5-4421-baed-08f9ba7b17c8" (UID: "cdd077b2-a7e5-4421-baed-08f9ba7b17c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.584460 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fadee3-bd2f-4f4e-af68-b938ecbbf327-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.584477 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.584490 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.587281 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-kube-api-access-n74pq" (OuterVolumeSpecName: "kube-api-access-n74pq") pod "cdd077b2-a7e5-4421-baed-08f9ba7b17c8" (UID: "cdd077b2-a7e5-4421-baed-08f9ba7b17c8"). InnerVolumeSpecName "kube-api-access-n74pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.591403 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16fadee3-bd2f-4f4e-af68-b938ecbbf327-kube-api-access-gb9xb" (OuterVolumeSpecName: "kube-api-access-gb9xb") pod "16fadee3-bd2f-4f4e-af68-b938ecbbf327" (UID: "16fadee3-bd2f-4f4e-af68-b938ecbbf327"). InnerVolumeSpecName "kube-api-access-gb9xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.686668 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n74pq\" (UniqueName: \"kubernetes.io/projected/cdd077b2-a7e5-4421-baed-08f9ba7b17c8-kube-api-access-n74pq\") on node \"crc\" DevicePath \"\"" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.686957 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb9xb\" (UniqueName: \"kubernetes.io/projected/16fadee3-bd2f-4f4e-af68-b938ecbbf327-kube-api-access-gb9xb\") on node \"crc\" DevicePath \"\"" Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.815556 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cb544846-9ln7v"] Dec 09 17:18:57 crc kubenswrapper[4853]: W1209 17:18:57.824161 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea81d355_d587_4a4b_a746_92c005875727.slice/crio-8ad04f25212b45d1ca7ca6b45c8bb3990689a907ca506e554c0993d4aae47860 WatchSource:0}: Error finding container 8ad04f25212b45d1ca7ca6b45c8bb3990689a907ca506e554c0993d4aae47860: Status 404 returned error can't find the container with id 8ad04f25212b45d1ca7ca6b45c8bb3990689a907ca506e554c0993d4aae47860 Dec 09 17:18:57 crc kubenswrapper[4853]: I1209 17:18:57.826941 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9tc2"] Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.147311 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-twpcm"] Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.268428 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf" event={"ID":"cdd077b2-a7e5-4421-baed-08f9ba7b17c8","Type":"ContainerDied","Data":"30be8153f3cc23653c65ab6230d1c2db901b067b899da34fb173b0b7567f3aed"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.268527 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7s7qf" Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.271897 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5","Type":"ContainerStarted","Data":"eb53d0c671b4cf0081fa46f1fd4a5740c5358fb06240fb4281e3e3f869fb768d"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.276778 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" event={"ID":"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f","Type":"ContainerStarted","Data":"0ec2681324ff689ab5bb7881b27a68c39ed363f8ab572cf89f63a9821f6c036a"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.276902 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.279769 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" event={"ID":"d29e5a87-e074-4460-8fcf-d8b519b2c746","Type":"ContainerStarted","Data":"50cb42485d8f00324b3240777a2a0745aa7ddc406424708e2b70a6ea5e1bc063"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.281014 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cfbc81f-b48d-4790-a213-10daf9f83287","Type":"ContainerStarted","Data":"9ff3c95648bd87858e903a073c7c256ea36edff3692de0dafac6f9130664e40d"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.282970 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5" event={"ID":"16fadee3-bd2f-4f4e-af68-b938ecbbf327","Type":"ContainerDied","Data":"7df3f75ac4e1759f6a59bcb781809b97fd54190746a1b8975e51a6923843db31"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.283006 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c98v5" Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.284425 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twpcm" event={"ID":"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568","Type":"ContainerStarted","Data":"0d04b2ab260d915220f81600b60d1666d4563378065babc1708013b790a27b75"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.286783 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerStarted","Data":"c32996bf4c134e9435a41247c1b7ac7d9dcba107c194213179d246718fe24070"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.291152 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" event={"ID":"96c29b56-c7bd-46aa-b130-e34425353476","Type":"ContainerStarted","Data":"dd9d122b0dfaeccaa054c7d52371f6bf1e327cbd7c389c47cac79978ba5a65e0"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.291285 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.294867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8c1f0f91-fd80-4a19-9561-119c381afc9c","Type":"ContainerStarted","Data":"6d5525850110220668968b8b94f88bb13b102d43b100aaddf4a10a106c7a9447"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.304311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9tc2" event={"ID":"e9047f51-9852-47e3-bc10-649c8d638054","Type":"ContainerStarted","Data":"4e193fc71729aab4f15c97a2752efbb04712b367ab504aac312122ed7d4786ca"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.306911 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cb544846-9ln7v" event={"ID":"ea81d355-d587-4a4b-a746-92c005875727","Type":"ContainerStarted","Data":"f9b1540c28972b0684ffe93167909bf5ef38ef5d13f350017261bfc1cafe96ce"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.306950 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cb544846-9ln7v" event={"ID":"ea81d355-d587-4a4b-a746-92c005875727","Type":"ContainerStarted","Data":"8ad04f25212b45d1ca7ca6b45c8bb3990689a907ca506e554c0993d4aae47860"} Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.317942 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7s7qf"] Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.325753 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7s7qf"] Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.333658 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" podStartSLOduration=6.522392021 podStartE2EDuration="18.333638913s" podCreationTimestamp="2025-12-09 17:18:40 +0000 UTC" firstStartedPulling="2025-12-09 17:18:44.328216889 +0000 UTC m=+1351.262956071" lastFinishedPulling="2025-12-09 17:18:56.139463781 +0000 UTC m=+1363.074202963" observedRunningTime="2025-12-09 17:18:58.316617697 +0000 UTC m=+1365.251356889" watchObservedRunningTime="2025-12-09 17:18:58.333638913 +0000 UTC m=+1365.268378095" Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.341467 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" podStartSLOduration=4.517365589 podStartE2EDuration="19.341450552s" podCreationTimestamp="2025-12-09 17:18:39 +0000 UTC" firstStartedPulling="2025-12-09 17:18:41.192660753 +0000 UTC m=+1348.127399935" lastFinishedPulling="2025-12-09 17:18:56.016745706 +0000 UTC m=+1362.951484898" observedRunningTime="2025-12-09 17:18:58.340165147 +0000 UTC m=+1365.274904329" watchObservedRunningTime="2025-12-09 17:18:58.341450552 +0000 UTC m=+1365.276189734" Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.404627 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c98v5"] Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.418970 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c98v5"] Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.420748 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68cb544846-9ln7v" podStartSLOduration=11.420732991 podStartE2EDuration="11.420732991s" podCreationTimestamp="2025-12-09 17:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:18:58.388248852 +0000 UTC m=+1365.322988034" watchObservedRunningTime="2025-12-09 17:18:58.420732991 +0000 UTC m=+1365.355472173" Dec 09 17:18:58 crc kubenswrapper[4853]: I1209 17:18:58.745763 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 17:18:58 crc kubenswrapper[4853]: W1209 17:18:58.992542 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4084ccc4_89b3_4be7_a0aa_83f3619a0cb1.slice/crio-9db4c3a52c5f55269db64cfb0ead9da4ae53eef00eba04b3f770d7c74a0b6432 WatchSource:0}: Error finding container 9db4c3a52c5f55269db64cfb0ead9da4ae53eef00eba04b3f770d7c74a0b6432: Status 404 returned error can't find the container with id 9db4c3a52c5f55269db64cfb0ead9da4ae53eef00eba04b3f770d7c74a0b6432 Dec 09 17:18:59 crc kubenswrapper[4853]: I1209 17:18:59.324813 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1","Type":"ContainerStarted","Data":"9db4c3a52c5f55269db64cfb0ead9da4ae53eef00eba04b3f770d7c74a0b6432"} Dec 09 17:18:59 crc kubenswrapper[4853]: I1209 17:18:59.583470 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fadee3-bd2f-4f4e-af68-b938ecbbf327" path="/var/lib/kubelet/pods/16fadee3-bd2f-4f4e-af68-b938ecbbf327/volumes" Dec 09 17:18:59 crc kubenswrapper[4853]: I1209 17:18:59.584200 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd077b2-a7e5-4421-baed-08f9ba7b17c8" path="/var/lib/kubelet/pods/cdd077b2-a7e5-4421-baed-08f9ba7b17c8/volumes" Dec 09 17:18:59 crc kubenswrapper[4853]: I1209 17:18:59.611474 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 17:19:00 crc kubenswrapper[4853]: W1209 17:19:00.316987 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0106126_8691_4275_82e8_a74d76c6482c.slice/crio-0ff947a958994500fd2646b3ab48676cbe3b52a71aea25017c111ec1ac3b5267 WatchSource:0}: Error finding container 0ff947a958994500fd2646b3ab48676cbe3b52a71aea25017c111ec1ac3b5267: Status 404 returned error can't find the container with id 
Dec 09 17:19:00 crc kubenswrapper[4853]: I1209 17:19:00.333741 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a0106126-8691-4275-82e8-a74d76c6482c","Type":"ContainerStarted","Data":"0ff947a958994500fd2646b3ab48676cbe3b52a71aea25017c111ec1ac3b5267"}
Dec 09 17:19:05 crc kubenswrapper[4853]: I1209 17:19:05.324867 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:19:05 crc kubenswrapper[4853]: I1209 17:19:05.794794 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-cb677"
Dec 09 17:19:05 crc kubenswrapper[4853]: I1209 17:19:05.856827 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-v7pf2"]
Dec 09 17:19:05 crc kubenswrapper[4853]: I1209 17:19:05.857091 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" podUID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerName="dnsmasq-dns" containerID="cri-o://0ec2681324ff689ab5bb7881b27a68c39ed363f8ab572cf89f63a9821f6c036a" gracePeriod=10
Dec 09 17:19:06 crc kubenswrapper[4853]: I1209 17:19:06.391765 4853 generic.go:334] "Generic (PLEG): container finished" podID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerID="0ec2681324ff689ab5bb7881b27a68c39ed363f8ab572cf89f63a9821f6c036a" exitCode=0
Dec 09 17:19:06 crc kubenswrapper[4853]: I1209 17:19:06.391813 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" event={"ID":"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f","Type":"ContainerDied","Data":"0ec2681324ff689ab5bb7881b27a68c39ed363f8ab572cf89f63a9821f6c036a"}
Dec 09 17:19:07 crc kubenswrapper[4853]: I1209 17:19:07.536974 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68cb544846-9ln7v"
Dec 09 17:19:07 crc kubenswrapper[4853]: I1209 17:19:07.537255 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68cb544846-9ln7v"
Dec 09 17:19:07 crc kubenswrapper[4853]: I1209 17:19:07.546823 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68cb544846-9ln7v"
Dec 09 17:19:08 crc kubenswrapper[4853]: I1209 17:19:08.455901 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68cb544846-9ln7v"
Dec 09 17:19:08 crc kubenswrapper[4853]: I1209 17:19:08.536146 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-658f7d5d7b-lbkxw"]
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.461588 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2" event={"ID":"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f","Type":"ContainerDied","Data":"e21af3bdb612fb3fc6001afc8ce5d073f45c23033a264da4eaad26c9a7b12827"}
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.462209 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e21af3bdb612fb3fc6001afc8ce5d073f45c23033a264da4eaad26c9a7b12827"
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.463271 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.568998 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlpst\" (UniqueName: \"kubernetes.io/projected/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-kube-api-access-xlpst\") pod \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") "
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.569087 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-dns-svc\") pod \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") "
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.569215 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-config\") pod \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\" (UID: \"68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f\") "
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.573489 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-kube-api-access-xlpst" (OuterVolumeSpecName: "kube-api-access-xlpst") pod "68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" (UID: "68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f"). InnerVolumeSpecName "kube-api-access-xlpst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.616644 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-config" (OuterVolumeSpecName: "config") pod "68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" (UID: "68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.636381 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" (UID: "68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.672769 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.672810 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-config\") on node \"crc\" DevicePath \"\""
Dec 09 17:19:09 crc kubenswrapper[4853]: I1209 17:19:09.672822 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlpst\" (UniqueName: \"kubernetes.io/projected/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f-kube-api-access-xlpst\") on node \"crc\" DevicePath \"\""
Dec 09 17:19:10 crc kubenswrapper[4853]: E1209 17:19:10.380211 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Dec 09 17:19:10 crc kubenswrapper[4853]: E1209 17:19:10.380695 4853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Dec 09 17:19:10 crc kubenswrapper[4853]: E1209 17:19:10.380939 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6c942,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(97a2bf6d-2b94-43ab-92e9-7a2355ae7df5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 17:19:10 crc kubenswrapper[4853]: E1209 17:19:10.382250 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5"
Dec 09 17:19:10 crc kubenswrapper[4853]: I1209 17:19:10.475757 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-v7pf2"
Dec 09 17:19:10 crc kubenswrapper[4853]: E1209 17:19:10.477958 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5"
Dec 09 17:19:10 crc kubenswrapper[4853]: I1209 17:19:10.628023 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-v7pf2"]
Dec 09 17:19:10 crc kubenswrapper[4853]: I1209 17:19:10.641713 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-v7pf2"]
Dec 09 17:19:11 crc kubenswrapper[4853]: I1209 17:19:11.486193 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8c1f0f91-fd80-4a19-9561-119c381afc9c","Type":"ContainerStarted","Data":"c3716d1915edeb3ab86b0e4587244cde717cd5cc745afa2adf072dcb76693929"}
Dec 09 17:19:11 crc kubenswrapper[4853]: I1209 17:19:11.486621 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Dec 09 17:19:11 crc kubenswrapper[4853]: I1209 17:19:11.487853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dc8bc986-e8a8-467c-8e2a-795c26a74de7","Type":"ContainerStarted","Data":"2c79d319895ed4885bdf89d4bf0f8e690844ba4e3f6891d16ec723c394395381"}
Dec 09 17:19:11 crc kubenswrapper[4853]: I1209 17:19:11.514242 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.737280493 podStartE2EDuration="27.514217614s" podCreationTimestamp="2025-12-09 17:18:44 +0000 UTC" firstStartedPulling="2025-12-09 17:18:57.452946208 +0000 UTC m=+1364.387685390" lastFinishedPulling="2025-12-09 17:19:09.229883329 +0000 UTC m=+1376.164622511" observedRunningTime="2025-12-09 17:19:11.508397491 +0000 UTC m=+1378.443136683" watchObservedRunningTime="2025-12-09 17:19:11.514217614 +0000 UTC m=+1378.448956806"
Dec 09 17:19:11 crc kubenswrapper[4853]: I1209 17:19:11.580130 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" path="/var/lib/kubelet/pods/68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f/volumes"
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.500037 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cfbc81f-b48d-4790-a213-10daf9f83287","Type":"ContainerStarted","Data":"bd791117fca3756df967baf5aef15cfd93128c8ce055dc203e4564f37222d8b6"}
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.506705 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twpcm" event={"ID":"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568","Type":"ContainerStarted","Data":"a3770c7880d77cd46992b44993be29d22a43ff3fd9c941d3264305579a304d71"}
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.511175 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1","Type":"ContainerStarted","Data":"69aab9ccb2038eb217e9e6dd442b179175e2cdfa46f091e5c8fd7791a632eb91"}
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.513510 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a0106126-8691-4275-82e8-a74d76c6482c","Type":"ContainerStarted","Data":"df837d6c538c20ee0a423613bc6bd9cbc39d598a897906dee94e4afe95aa0d48"}
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.515195 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9tc2" event={"ID":"e9047f51-9852-47e3-bc10-649c8d638054","Type":"ContainerStarted","Data":"352605c6cab6f08429cc1dda77dae6622b83e3b117d9191fb69ca74892c3a10c"}
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.515338 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-k9tc2"
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.518579 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" event={"ID":"d29e5a87-e074-4460-8fcf-d8b519b2c746","Type":"ContainerStarted","Data":"1a0dfc20756145345b94e741beea16a1fd3b8d21c34ad8f0c41053b75eb5c8c6"}
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.575883 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-k9tc2" podStartSLOduration=10.41469123 podStartE2EDuration="22.575862844s" podCreationTimestamp="2025-12-09 17:18:50 +0000 UTC" firstStartedPulling="2025-12-09 17:18:57.831858291 +0000 UTC m=+1364.766597473" lastFinishedPulling="2025-12-09 17:19:09.993029915 +0000 UTC m=+1376.927769087" observedRunningTime="2025-12-09 17:19:12.575077742 +0000 UTC m=+1379.509816934" watchObservedRunningTime="2025-12-09 17:19:12.575862844 +0000 UTC m=+1379.510602026"
Dec 09 17:19:12 crc kubenswrapper[4853]: I1209 17:19:12.599748 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nhns2" podStartSLOduration=14.086921288 podStartE2EDuration="26.599719522s" podCreationTimestamp="2025-12-09 17:18:46 +0000 UTC" firstStartedPulling="2025-12-09 17:18:57.426188729 +0000 UTC m=+1364.360927911" lastFinishedPulling="2025-12-09 17:19:09.938986963 +0000 UTC m=+1376.873726145" observedRunningTime="2025-12-09 17:19:12.594572668 +0000 UTC m=+1379.529311870" watchObservedRunningTime="2025-12-09 17:19:12.599719522 +0000 UTC m=+1379.534458704"
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.538244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03a2cb4e-7efc-4040-a115-db55575800e5","Type":"ContainerStarted","Data":"21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4"}
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.541870 4853 generic.go:334] "Generic (PLEG): container finished" podID="b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568" containerID="a3770c7880d77cd46992b44993be29d22a43ff3fd9c941d3264305579a304d71" exitCode=0
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.541949 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twpcm" event={"ID":"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568","Type":"ContainerDied","Data":"a3770c7880d77cd46992b44993be29d22a43ff3fd9c941d3264305579a304d71"}
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.544451 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerStarted","Data":"ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b"}
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.547649 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96900f2e-a2ad-47fe-be9b-7b6a924ded82","Type":"ContainerStarted","Data":"9d3feb5a12e69400f9270b312552b50129b059d8c865fffbce24185943e545b6"}
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.720088 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6bchm"]
Dec 09 17:19:13 crc kubenswrapper[4853]: E1209 17:19:13.720873 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerName="dnsmasq-dns"
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.720893 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerName="dnsmasq-dns"
Dec 09 17:19:13 crc kubenswrapper[4853]: E1209 17:19:13.720910 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerName="init"
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.720918 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerName="init"
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.721136 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ab9a22-e8e9-4a31-a8d5-fe5055b7d59f" containerName="dnsmasq-dns"
Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.721923 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bchm"
Need to start a new one" pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.728404 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bchm"] Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.770142 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.859343 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b54962b-4869-4264-9a51-95ccdb7f3cbf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.859489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b54962b-4869-4264-9a51-95ccdb7f3cbf-config\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.859698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b54962b-4869-4264-9a51-95ccdb7f3cbf-combined-ca-bundle\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.859765 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0b54962b-4869-4264-9a51-95ccdb7f3cbf-ovs-rundir\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.859792 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0b54962b-4869-4264-9a51-95ccdb7f3cbf-ovn-rundir\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.859844 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhf9\" (UniqueName: \"kubernetes.io/projected/0b54962b-4869-4264-9a51-95ccdb7f3cbf-kube-api-access-8zhf9\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.903611 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8qllg"] Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.905746 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.907798 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.937810 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8qllg"] Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.961304 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b54962b-4869-4264-9a51-95ccdb7f3cbf-combined-ca-bundle\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.961394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0b54962b-4869-4264-9a51-95ccdb7f3cbf-ovs-rundir\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.961420 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0b54962b-4869-4264-9a51-95ccdb7f3cbf-ovn-rundir\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.961482 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhf9\" (UniqueName: \"kubernetes.io/projected/0b54962b-4869-4264-9a51-95ccdb7f3cbf-kube-api-access-8zhf9\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.961528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b54962b-4869-4264-9a51-95ccdb7f3cbf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.961719 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b54962b-4869-4264-9a51-95ccdb7f3cbf-config\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.962631 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b54962b-4869-4264-9a51-95ccdb7f3cbf-config\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.963372 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0b54962b-4869-4264-9a51-95ccdb7f3cbf-ovs-rundir\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.962990 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0b54962b-4869-4264-9a51-95ccdb7f3cbf-ovn-rundir\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.968233 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b54962b-4869-4264-9a51-95ccdb7f3cbf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.986294 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhf9\" (UniqueName: \"kubernetes.io/projected/0b54962b-4869-4264-9a51-95ccdb7f3cbf-kube-api-access-8zhf9\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:13 crc kubenswrapper[4853]: I1209 17:19:13.986355 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b54962b-4869-4264-9a51-95ccdb7f3cbf-combined-ca-bundle\") pod \"ovn-controller-metrics-6bchm\" (UID: \"0b54962b-4869-4264-9a51-95ccdb7f3cbf\") " pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.063542 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlxt5\" (UniqueName: \"kubernetes.io/projected/f93044f8-12ac-4b97-be72-fa998815fe43-kube-api-access-dlxt5\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.063733 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.063842 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.064171 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-config\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.124935 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6bchm" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.142844 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8qllg"] Dec 09 17:19:14 crc kubenswrapper[4853]: E1209 17:19:14.143929 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-dlxt5 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" podUID="f93044f8-12ac-4b97-be72-fa998815fe43" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.159388 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wz58m"] Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.169112 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.169425 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlxt5\" (UniqueName: \"kubernetes.io/projected/f93044f8-12ac-4b97-be72-fa998815fe43-kube-api-access-dlxt5\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.169512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.169552 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.169663 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-config\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.170771 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-config\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.170820 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.171744 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.172374 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.194302 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlxt5\" (UniqueName: \"kubernetes.io/projected/f93044f8-12ac-4b97-be72-fa998815fe43-kube-api-access-dlxt5\") pod \"dnsmasq-dns-7fd796d7df-8qllg\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.199834 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wz58m"] Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.270881 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vc42\" (UniqueName: \"kubernetes.io/projected/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-kube-api-access-2vc42\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.270944 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.270987 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.271056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.273149 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-config\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.375705 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.376137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" 
Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.376215 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.376273 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-config\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.376317 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vc42\" (UniqueName: \"kubernetes.io/projected/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-kube-api-access-2vc42\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.394058 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.395035 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-config\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.395057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.395689 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.404288 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vc42\" (UniqueName: \"kubernetes.io/projected/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-kube-api-access-2vc42\") pod \"dnsmasq-dns-86db49b7ff-wz58m\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.574386 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twpcm" event={"ID":"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568","Type":"ContainerStarted","Data":"380e0b7c4f26a82c3efdb9aac5f8be3df2f4bff73e3116b08236b0f4b386f5b8"} Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.574493 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:19:14 
crc kubenswrapper[4853]: I1209 17:19:14.574622 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.574638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twpcm" event={"ID":"b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568","Type":"ContainerStarted","Data":"2959d628c0c3cf365ebe334218f7df2793476d78ccc6aec0aea21b51b69a7137"} Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.574940 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.609326 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.609388 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.628237 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-twpcm" podStartSLOduration=12.987869419999999 podStartE2EDuration="24.628216498s" podCreationTimestamp="2025-12-09 17:18:50 +0000 UTC" firstStartedPulling="2025-12-09 17:18:58.151303561 +0000 UTC m=+1365.086042743" lastFinishedPulling="2025-12-09 17:19:09.791650639 +0000 UTC m=+1376.726389821" observedRunningTime="2025-12-09 17:19:14.610532263 +0000 UTC m=+1381.545271445" watchObservedRunningTime="2025-12-09 17:19:14.628216498 +0000 UTC m=+1381.562955680" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.691145 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-ovsdbserver-nb\") pod \"f93044f8-12ac-4b97-be72-fa998815fe43\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.691612 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlxt5\" (UniqueName: \"kubernetes.io/projected/f93044f8-12ac-4b97-be72-fa998815fe43-kube-api-access-dlxt5\") pod \"f93044f8-12ac-4b97-be72-fa998815fe43\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.691678 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-config\") pod \"f93044f8-12ac-4b97-be72-fa998815fe43\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.691752 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-dns-svc\") pod \"f93044f8-12ac-4b97-be72-fa998815fe43\" (UID: \"f93044f8-12ac-4b97-be72-fa998815fe43\") " Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.694298 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f93044f8-12ac-4b97-be72-fa998815fe43" (UID: "f93044f8-12ac-4b97-be72-fa998815fe43"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.697085 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-config" (OuterVolumeSpecName: "config") pod "f93044f8-12ac-4b97-be72-fa998815fe43" (UID: "f93044f8-12ac-4b97-be72-fa998815fe43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.697483 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f93044f8-12ac-4b97-be72-fa998815fe43" (UID: "f93044f8-12ac-4b97-be72-fa998815fe43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.699554 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93044f8-12ac-4b97-be72-fa998815fe43-kube-api-access-dlxt5" (OuterVolumeSpecName: "kube-api-access-dlxt5") pod "f93044f8-12ac-4b97-be72-fa998815fe43" (UID: "f93044f8-12ac-4b97-be72-fa998815fe43"). InnerVolumeSpecName "kube-api-access-dlxt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.781616 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bchm"] Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.805698 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.805751 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlxt5\" (UniqueName: \"kubernetes.io/projected/f93044f8-12ac-4b97-be72-fa998815fe43-kube-api-access-dlxt5\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.805766 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:14 crc kubenswrapper[4853]: I1209 17:19:14.805775 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93044f8-12ac-4b97-be72-fa998815fe43-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.176552 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wz58m"] Dec 09 17:19:15 crc kubenswrapper[4853]: W1209 17:19:15.191583 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54cdd066_306e_4bbb_8dbc_7b01cf8f32f7.slice/crio-2456817c268381079eeffbc72ad809c958ce8ae72c2d5e8d510ddec999e83eb9 WatchSource:0}: Error finding container 2456817c268381079eeffbc72ad809c958ce8ae72c2d5e8d510ddec999e83eb9: Status 404 returned error can't find the container with id 2456817c268381079eeffbc72ad809c958ce8ae72c2d5e8d510ddec999e83eb9 Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.614758 4853 generic.go:334] "Generic (PLEG): container finished" podID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerID="420351ec532ba770033973beec7423451fe4d8ae13d33c06e1d753b6cf88dde8" exitCode=0 Dec 09 17:19:15 
crc kubenswrapper[4853]: I1209 17:19:15.615018 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" event={"ID":"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7","Type":"ContainerDied","Data":"420351ec532ba770033973beec7423451fe4d8ae13d33c06e1d753b6cf88dde8"} Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.615045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" event={"ID":"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7","Type":"ContainerStarted","Data":"2456817c268381079eeffbc72ad809c958ce8ae72c2d5e8d510ddec999e83eb9"} Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.617683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bchm" event={"ID":"0b54962b-4869-4264-9a51-95ccdb7f3cbf","Type":"ContainerStarted","Data":"4ff94fc286198306866512d5bb3a81e39b64e12ce6f232f6645011c521ac3e27"} Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.629039 4853 generic.go:334] "Generic (PLEG): container finished" podID="dc8bc986-e8a8-467c-8e2a-795c26a74de7" containerID="2c79d319895ed4885bdf89d4bf0f8e690844ba4e3f6891d16ec723c394395381" exitCode=0 Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.629114 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8qllg" Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.629681 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dc8bc986-e8a8-467c-8e2a-795c26a74de7","Type":"ContainerDied","Data":"2c79d319895ed4885bdf89d4bf0f8e690844ba4e3f6891d16ec723c394395381"} Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.782052 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8qllg"] Dec 09 17:19:15 crc kubenswrapper[4853]: I1209 17:19:15.807069 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8qllg"] Dec 09 17:19:16 crc kubenswrapper[4853]: I1209 17:19:16.656070 4853 generic.go:334] "Generic (PLEG): container finished" podID="5cfbc81f-b48d-4790-a213-10daf9f83287" containerID="bd791117fca3756df967baf5aef15cfd93128c8ce055dc203e4564f37222d8b6" exitCode=0 Dec 09 17:19:16 crc kubenswrapper[4853]: I1209 17:19:16.656157 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cfbc81f-b48d-4790-a213-10daf9f83287","Type":"ContainerDied","Data":"bd791117fca3756df967baf5aef15cfd93128c8ce055dc203e4564f37222d8b6"} Dec 09 17:19:17 crc kubenswrapper[4853]: I1209 17:19:17.599729 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93044f8-12ac-4b97-be72-fa998815fe43" path="/var/lib/kubelet/pods/f93044f8-12ac-4b97-be72-fa998815fe43/volumes" Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.676283 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a0106126-8691-4275-82e8-a74d76c6482c","Type":"ContainerStarted","Data":"3e702781de25768373316f25e7f1d48025cb5268c551b97c4ce0b92bd8883d52"} Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.679758 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" event={"ID":"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7","Type":"ContainerStarted","Data":"ee11d28451444e8e1d1d965743c94c769d26a065aed18448c3f99f995ba951b5"} Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.679892 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.681895 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cfbc81f-b48d-4790-a213-10daf9f83287","Type":"ContainerStarted","Data":"a5353d99fa12a879244f08b13481e8355bdda941235c5c20704458f85e0dc93d"} Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.684235 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bchm" event={"ID":"0b54962b-4869-4264-9a51-95ccdb7f3cbf","Type":"ContainerStarted","Data":"30e0fab722a8220c2b95863e0c3adeafc64c5b13b7152bfda8bb66f6dc1ae361"} Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.686378 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4084ccc4-89b3-4be7-a0aa-83f3619a0cb1","Type":"ContainerStarted","Data":"cd1f38c3c6fdd4840a5b0f55d0970b87703ea23a97d79a840fd2e9d34d891f37"} Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.688333 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dc8bc986-e8a8-467c-8e2a-795c26a74de7","Type":"ContainerStarted","Data":"836b2ce6e2b855c3fffa564b316eb60bbadc2daa93e97b2841750c7e7872a5d6"} Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.725946 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.241058117 podStartE2EDuration="37.725921119s" podCreationTimestamp="2025-12-09 17:18:41 +0000 UTC" firstStartedPulling="2025-12-09 17:18:56.851355913 +0000 UTC m=+1363.786095165" lastFinishedPulling="2025-12-09 17:19:09.336218985 +0000 UTC m=+1376.270958167" observedRunningTime="2025-12-09 17:19:18.72168243 +0000 UTC m=+1385.656421652" watchObservedRunningTime="2025-12-09 17:19:18.725921119 +0000 UTC m=+1385.660660321" Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.730128 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.477914214 podStartE2EDuration="26.730112816s" podCreationTimestamp="2025-12-09 17:18:52 +0000 UTC" firstStartedPulling="2025-12-09 17:19:00.318856859 +0000 UTC m=+1367.253596041" lastFinishedPulling="2025-12-09 17:19:17.571055461 +0000 UTC m=+1384.505794643" observedRunningTime="2025-12-09 17:19:18.700709404 +0000 UTC m=+1385.635448596" watchObservedRunningTime="2025-12-09 17:19:18.730112816 +0000 UTC m=+1385.664852018" Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.753793 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6bchm" podStartSLOduration=2.969776602 podStartE2EDuration="5.753768119s" podCreationTimestamp="2025-12-09 17:19:13 +0000 UTC" firstStartedPulling="2025-12-09 17:19:14.834534882 +0000 UTC m=+1381.769274064" lastFinishedPulling="2025-12-09 17:19:17.618526399 +0000 UTC m=+1384.553265581" observedRunningTime="2025-12-09 17:19:18.738955214 +0000 UTC m=+1385.673694416" watchObservedRunningTime="2025-12-09 17:19:18.753768119 +0000 UTC m=+1385.688507321" Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.786492 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.202022425 podStartE2EDuration="36.786475623s" podCreationTimestamp="2025-12-09 17:18:42 +0000 UTC" firstStartedPulling="2025-12-09 17:18:57.463981057 +0000 UTC m=+1364.398720239" 
lastFinishedPulling="2025-12-09 17:19:10.048434255 +0000 UTC m=+1376.983173437" observedRunningTime="2025-12-09 17:19:18.777822411 +0000 UTC m=+1385.712561593" watchObservedRunningTime="2025-12-09 17:19:18.786475623 +0000 UTC m=+1385.721214805" Dec 09 17:19:18 crc kubenswrapper[4853]: I1209 17:19:18.847472 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.272391998 podStartE2EDuration="29.84744907s" podCreationTimestamp="2025-12-09 17:18:49 +0000 UTC" firstStartedPulling="2025-12-09 17:18:58.995028142 +0000 UTC m=+1365.929767324" lastFinishedPulling="2025-12-09 17:19:17.570085224 +0000 UTC m=+1384.504824396" observedRunningTime="2025-12-09 17:19:18.837048869 +0000 UTC m=+1385.771788061" watchObservedRunningTime="2025-12-09 17:19:18.84744907 +0000 UTC m=+1385.782188252" Dec 09 17:19:19 crc kubenswrapper[4853]: I1209 17:19:19.376516 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 09 17:19:19 crc kubenswrapper[4853]: I1209 17:19:19.698701 4853 generic.go:334] "Generic (PLEG): container finished" podID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerID="ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b" exitCode=0 Dec 09 17:19:19 crc kubenswrapper[4853]: I1209 17:19:19.699166 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerDied","Data":"ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b"} Dec 09 17:19:19 crc kubenswrapper[4853]: I1209 17:19:19.757910 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" podStartSLOduration=5.757885388 podStartE2EDuration="5.757885388s" podCreationTimestamp="2025-12-09 17:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:18.867718167 +0000 UTC m=+1385.802457349" watchObservedRunningTime="2025-12-09 17:19:19.757885388 +0000 UTC m=+1386.692624570" Dec 09 17:19:19 crc kubenswrapper[4853]: I1209 17:19:19.840420 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 09 17:19:21 crc kubenswrapper[4853]: I1209 17:19:21.275269 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 09 17:19:21 crc kubenswrapper[4853]: I1209 17:19:21.275562 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 09 17:19:21 crc kubenswrapper[4853]: I1209 17:19:21.314985 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 09 17:19:21 crc kubenswrapper[4853]: I1209 17:19:21.376397 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 09 17:19:21 crc kubenswrapper[4853]: I1209 17:19:21.418289 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 09 17:19:21 crc kubenswrapper[4853]: I1209 17:19:21.764140 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 09 17:19:21 crc kubenswrapper[4853]: I1209 17:19:21.768392 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 09 17:19:22 crc 
kubenswrapper[4853]: I1209 17:19:22.068555 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.070454 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.073041 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.073217 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-j9rzq" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.073250 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.073502 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.085736 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.191492 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10737df5-395b-499d-a49d-daac220f432c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.191552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8fr\" (UniqueName: \"kubernetes.io/projected/10737df5-395b-499d-a49d-daac220f432c-kube-api-access-2s8fr\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.191827 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10737df5-395b-499d-a49d-daac220f432c-config\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.191930 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10737df5-395b-499d-a49d-daac220f432c-scripts\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.192130 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.192181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.192209 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.294551 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10737df5-395b-499d-a49d-daac220f432c-config\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.294624 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10737df5-395b-499d-a49d-daac220f432c-scripts\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.294705 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.294731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.294751 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.294790 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10737df5-395b-499d-a49d-daac220f432c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.294826 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8fr\" (UniqueName: \"kubernetes.io/projected/10737df5-395b-499d-a49d-daac220f432c-kube-api-access-2s8fr\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.295461 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10737df5-395b-499d-a49d-daac220f432c-config\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.295976 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10737df5-395b-499d-a49d-daac220f432c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.297064 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/10737df5-395b-499d-a49d-daac220f432c-scripts\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.300422 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.302111 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.302562 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10737df5-395b-499d-a49d-daac220f432c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.312934 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8fr\" (UniqueName: \"kubernetes.io/projected/10737df5-395b-499d-a49d-daac220f432c-kube-api-access-2s8fr\") pod \"ovn-northd-0\" (UID: \"10737df5-395b-499d-a49d-daac220f432c\") " pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.402460 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 17:19:22 crc kubenswrapper[4853]: I1209 17:19:22.864906 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 17:19:23 crc kubenswrapper[4853]: I1209 17:19:23.083810 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 09 17:19:23 crc kubenswrapper[4853]: I1209 17:19:23.083859 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 09 17:19:23 crc kubenswrapper[4853]: I1209 17:19:23.760976 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"10737df5-395b-499d-a49d-daac220f432c","Type":"ContainerStarted","Data":"45b6982084fb32a4bc9d6972aae91babfc1c358735edc5f81531ef46c5dda25c"} Dec 09 17:19:24 crc kubenswrapper[4853]: I1209 17:19:24.611823 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:24 crc kubenswrapper[4853]: I1209 17:19:24.656292 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 09 17:19:24 crc kubenswrapper[4853]: I1209 17:19:24.656339 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 09 17:19:24 crc kubenswrapper[4853]: I1209 17:19:24.679225 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cb677"] Dec 09 17:19:24 crc kubenswrapper[4853]: I1209 17:19:24.679490 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" podUID="96c29b56-c7bd-46aa-b130-e34425353476" containerName="dnsmasq-dns" 
containerID="cri-o://dd9d122b0dfaeccaa054c7d52371f6bf1e327cbd7c389c47cac79978ba5a65e0" gracePeriod=10 Dec 09 17:19:24 crc kubenswrapper[4853]: I1209 17:19:24.782139 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 09 17:19:24 crc kubenswrapper[4853]: I1209 17:19:24.870858 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 09 17:19:25 crc kubenswrapper[4853]: I1209 17:19:25.783142 4853 generic.go:334] "Generic (PLEG): container finished" podID="96c29b56-c7bd-46aa-b130-e34425353476" containerID="dd9d122b0dfaeccaa054c7d52371f6bf1e327cbd7c389c47cac79978ba5a65e0" exitCode=0 Dec 09 17:19:25 crc kubenswrapper[4853]: I1209 17:19:25.783205 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" event={"ID":"96c29b56-c7bd-46aa-b130-e34425353476","Type":"ContainerDied","Data":"dd9d122b0dfaeccaa054c7d52371f6bf1e327cbd7c389c47cac79978ba5a65e0"} Dec 09 17:19:25 crc kubenswrapper[4853]: I1209 17:19:25.794583 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" podUID="96c29b56-c7bd-46aa-b130-e34425353476" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Dec 09 17:19:25 crc kubenswrapper[4853]: I1209 17:19:25.895439 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 09 17:19:25 crc kubenswrapper[4853]: I1209 17:19:25.975940 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.451243 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-qlcff"] Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.454838 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.485702 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-qlcff"] Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.495894 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-7926-account-create-update-2xwpc"] Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.504086 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.508856 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.528902 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.577522 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-7926-account-create-update-2xwpc"] Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.600248 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrh4w\" (UniqueName: \"kubernetes.io/projected/96c29b56-c7bd-46aa-b130-e34425353476-kube-api-access-zrh4w\") pod \"96c29b56-c7bd-46aa-b130-e34425353476\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.600387 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-config\") pod \"96c29b56-c7bd-46aa-b130-e34425353476\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.600424 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-dns-svc\") pod \"96c29b56-c7bd-46aa-b130-e34425353476\" (UID: \"96c29b56-c7bd-46aa-b130-e34425353476\") " Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.600634 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd2vq\" (UniqueName: \"kubernetes.io/projected/cfe99053-f2b5-435d-838e-06a8c59652dc-kube-api-access-cd2vq\") pod \"mysqld-exporter-7926-account-create-update-2xwpc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.600712 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5sh\" (UniqueName: \"kubernetes.io/projected/60933e3f-3b3c-40ab-a960-188a3b30b2f1-kube-api-access-bl5sh\") pod \"mysqld-exporter-openstack-db-create-qlcff\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.600766 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe99053-f2b5-435d-838e-06a8c59652dc-operator-scripts\") pod \"mysqld-exporter-7926-account-create-update-2xwpc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.600869 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60933e3f-3b3c-40ab-a960-188a3b30b2f1-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-qlcff\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.627684 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c29b56-c7bd-46aa-b130-e34425353476-kube-api-access-zrh4w" (OuterVolumeSpecName: "kube-api-access-zrh4w") pod "96c29b56-c7bd-46aa-b130-e34425353476" (UID: "96c29b56-c7bd-46aa-b130-e34425353476"). InnerVolumeSpecName "kube-api-access-zrh4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.712541 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60933e3f-3b3c-40ab-a960-188a3b30b2f1-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-qlcff\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.712963 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd2vq\" (UniqueName: \"kubernetes.io/projected/cfe99053-f2b5-435d-838e-06a8c59652dc-kube-api-access-cd2vq\") pod \"mysqld-exporter-7926-account-create-update-2xwpc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.713124 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5sh\" (UniqueName: \"kubernetes.io/projected/60933e3f-3b3c-40ab-a960-188a3b30b2f1-kube-api-access-bl5sh\") pod \"mysqld-exporter-openstack-db-create-qlcff\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.713291 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe99053-f2b5-435d-838e-06a8c59652dc-operator-scripts\") pod \"mysqld-exporter-7926-account-create-update-2xwpc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.713466 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrh4w\" (UniqueName: \"kubernetes.io/projected/96c29b56-c7bd-46aa-b130-e34425353476-kube-api-access-zrh4w\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.714407 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe99053-f2b5-435d-838e-06a8c59652dc-operator-scripts\") pod \"mysqld-exporter-7926-account-create-update-2xwpc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.714527 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60933e3f-3b3c-40ab-a960-188a3b30b2f1-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-qlcff\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.762354 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd2vq\" (UniqueName: \"kubernetes.io/projected/cfe99053-f2b5-435d-838e-06a8c59652dc-kube-api-access-cd2vq\") pod \"mysqld-exporter-7926-account-create-update-2xwpc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.771965 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5sh\" (UniqueName: 
\"kubernetes.io/projected/60933e3f-3b3c-40ab-a960-188a3b30b2f1-kube-api-access-bl5sh\") pod \"mysqld-exporter-openstack-db-create-qlcff\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.781276 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-5l46m"] Dec 09 17:19:26 crc kubenswrapper[4853]: E1209 17:19:26.781878 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c29b56-c7bd-46aa-b130-e34425353476" containerName="init" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.781896 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c29b56-c7bd-46aa-b130-e34425353476" containerName="init" Dec 09 17:19:26 crc kubenswrapper[4853]: E1209 17:19:26.781910 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c29b56-c7bd-46aa-b130-e34425353476" containerName="dnsmasq-dns" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.781917 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c29b56-c7bd-46aa-b130-e34425353476" containerName="dnsmasq-dns" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.782161 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c29b56-c7bd-46aa-b130-e34425353476" containerName="dnsmasq-dns" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.783478 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.850029 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5l46m"] Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.864667 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96c29b56-c7bd-46aa-b130-e34425353476" (UID: "96c29b56-c7bd-46aa-b130-e34425353476"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.870070 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.873979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5","Type":"ContainerStarted","Data":"c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff"} Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.874960 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.895398 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.913327 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-config" (OuterVolumeSpecName: "config") pod "96c29b56-c7bd-46aa-b130-e34425353476" (UID: "96c29b56-c7bd-46aa-b130-e34425353476"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.923907 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.924095 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cb677" event={"ID":"96c29b56-c7bd-46aa-b130-e34425353476","Type":"ContainerDied","Data":"18a3dec7d0d5452b063e7b4f082b6e6f8effc4617ec9356b10807e6bb837d2a0"} Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.924176 4853 scope.go:117] "RemoveContainer" containerID="dd9d122b0dfaeccaa054c7d52371f6bf1e327cbd7c389c47cac79978ba5a65e0" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.925449 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.925531 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-dns-svc\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.925570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.925670 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-config\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.925696 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ktxj\" (UniqueName: \"kubernetes.io/projected/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-kube-api-access-5ktxj\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.925812 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.925829 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96c29b56-c7bd-46aa-b130-e34425353476-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.940734 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.410778653 podStartE2EDuration="40.940717805s" podCreationTimestamp="2025-12-09 17:18:46 +0000 UTC" firstStartedPulling="2025-12-09 17:18:57.434984116 +0000 UTC m=+1364.369723298" lastFinishedPulling="2025-12-09 17:19:25.964923268 +0000 UTC m=+1392.899662450" 
observedRunningTime="2025-12-09 17:19:26.927162605 +0000 UTC m=+1393.861901787" watchObservedRunningTime="2025-12-09 17:19:26.940717805 +0000 UTC m=+1393.875456987" Dec 09 17:19:26 crc kubenswrapper[4853]: I1209 17:19:26.980841 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cb677"] Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:26.999126 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cb677"] Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.027747 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-config\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.029554 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ktxj\" (UniqueName: \"kubernetes.io/projected/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-kube-api-access-5ktxj\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.028871 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-config\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.030394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.030530 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-dns-svc\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.030577 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.033992 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.042488 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 
09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.043078 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-dns-svc\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.056337 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ktxj\" (UniqueName: \"kubernetes.io/projected/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-kube-api-access-5ktxj\") pod \"dnsmasq-dns-698758b865-5l46m\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.066429 4853 scope.go:117] "RemoveContainer" containerID="a05885a367d6608871d8f12c4993d38ace1ff72d68a6b6251d45f7d0989bdd8b" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.222100 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.587975 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c29b56-c7bd-46aa-b130-e34425353476" path="/var/lib/kubelet/pods/96c29b56-c7bd-46aa-b130-e34425353476/volumes" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.605623 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-qlcff"] Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.788978 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.801927 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.803760 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-7926-account-create-update-2xwpc"] Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.803912 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-wdd6n" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.804973 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.805050 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.805131 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.814588 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 17:19:27 crc kubenswrapper[4853]: W1209 17:19:27.891862 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60933e3f_3b3c_40ab_a960_188a3b30b2f1.slice/crio-13ab6ae3bc924e14357d16c3397f2573a8e159dd26c2a952275c42a394ed495c WatchSource:0}: Error finding container 13ab6ae3bc924e14357d16c3397f2573a8e159dd26c2a952275c42a394ed495c: Status 404 returned error can't find the container with id 13ab6ae3bc924e14357d16c3397f2573a8e159dd26c2a952275c42a394ed495c Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.960359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" event={"ID":"60933e3f-3b3c-40ab-a960-188a3b30b2f1","Type":"ContainerStarted","Data":"13ab6ae3bc924e14357d16c3397f2573a8e159dd26c2a952275c42a394ed495c"} Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.963237 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" event={"ID":"cfe99053-f2b5-435d-838e-06a8c59652dc","Type":"ContainerStarted","Data":"3f209ee93b16b8352824e929572794fe00683a6f2d292914091269b96b4c5d36"} Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.967578 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlcfn\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-kube-api-access-xlcfn\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.967693 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.967720 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-cache\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.967880 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lock\" (UniqueName: \"kubernetes.io/empty-dir/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-lock\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:27 crc kubenswrapper[4853]: I1209 17:19:27.968065 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.069876 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlcfn\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-kube-api-access-xlcfn\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.070105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.070129 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-cache\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.070204 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-lock\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.070320 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: E1209 17:19:28.071371 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 17:19:28 crc kubenswrapper[4853]: E1209 17:19:28.071385 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 17:19:28 crc kubenswrapper[4853]: E1209 17:19:28.071420 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift podName:2f6e868f-f4bc-42d3-bbe6-2a391e2b768d nodeName:}" failed. No retries permitted until 2025-12-09 17:19:28.571404957 +0000 UTC m=+1395.506144139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift") pod "swift-storage-0" (UID: "2f6e868f-f4bc-42d3-bbe6-2a391e2b768d") : configmap "swift-ring-files" not found Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.071795 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-cache\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.071989 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-lock\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.072315 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.092991 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlcfn\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-kube-api-access-xlcfn\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.105878 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.485049 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5l46m"] Dec 09 17:19:28 crc kubenswrapper[4853]: W1209 17:19:28.490191 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca2a9b0d_f643_4e8a_8076_b76e5a8e703c.slice/crio-d816e16e384c43fa8dd8495e0d8f3ae4a1ffc89f0c038f607c2a6d0c290a9e30 WatchSource:0}: Error finding container d816e16e384c43fa8dd8495e0d8f3ae4a1ffc89f0c038f607c2a6d0c290a9e30: Status 404 returned error can't find the container with id d816e16e384c43fa8dd8495e0d8f3ae4a1ffc89f0c038f607c2a6d0c290a9e30 Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.601069 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:28 crc kubenswrapper[4853]: E1209 17:19:28.601392 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 17:19:28 crc kubenswrapper[4853]: E1209 17:19:28.601410 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 17:19:28 crc kubenswrapper[4853]: E1209 17:19:28.601462 4853 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift podName:2f6e868f-f4bc-42d3-bbe6-2a391e2b768d nodeName:}" failed. No retries permitted until 2025-12-09 17:19:29.60144668 +0000 UTC m=+1396.536185862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift") pod "swift-storage-0" (UID: "2f6e868f-f4bc-42d3-bbe6-2a391e2b768d") : configmap "swift-ring-files" not found Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.601809 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.601847 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.982119 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" event={"ID":"cfe99053-f2b5-435d-838e-06a8c59652dc","Type":"ContainerStarted","Data":"3292ad8070a7a77535cd8f0075363bae040c18c86bc2edcd1c123e6abfb03b28"} Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.987919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"10737df5-395b-499d-a49d-daac220f432c","Type":"ContainerStarted","Data":"858acbaceb59e87e3365e8a1b246aee37204fcc9b8808f1db678c84150c870eb"} Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.990033 4853 generic.go:334] "Generic (PLEG): container finished" podID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerID="5905aecb83fc657c85c76099cef84761ecf6d4d4eb423b1d76db1eda66bacb4f" exitCode=0 Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.990077 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5l46m" event={"ID":"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c","Type":"ContainerDied","Data":"5905aecb83fc657c85c76099cef84761ecf6d4d4eb423b1d76db1eda66bacb4f"} Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.990095 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5l46m" event={"ID":"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c","Type":"ContainerStarted","Data":"d816e16e384c43fa8dd8495e0d8f3ae4a1ffc89f0c038f607c2a6d0c290a9e30"} Dec 09 17:19:28 crc kubenswrapper[4853]: I1209 17:19:28.993126 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" event={"ID":"60933e3f-3b3c-40ab-a960-188a3b30b2f1","Type":"ContainerStarted","Data":"08d067cd23c628d314ccfeed5149e0d5bbfcb9c085541a0e898ec8eefc7e0b06"} Dec 09 17:19:29 crc kubenswrapper[4853]: I1209 17:19:29.018485 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" podStartSLOduration=3.01846481 podStartE2EDuration="3.01846481s" podCreationTimestamp="2025-12-09 17:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 17:19:29.003786709 +0000 UTC m=+1395.938525921" watchObservedRunningTime="2025-12-09 17:19:29.01846481 +0000 UTC m=+1395.953203992" Dec 09 17:19:29 crc kubenswrapper[4853]: I1209 17:19:29.056790 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" podStartSLOduration=3.056767321 podStartE2EDuration="3.056767321s" podCreationTimestamp="2025-12-09 17:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:29.053960702 +0000 UTC m=+1395.988699874" watchObservedRunningTime="2025-12-09 17:19:29.056767321 +0000 UTC m=+1395.991506503" Dec 09 17:19:29 crc kubenswrapper[4853]: E1209 17:19:29.416958 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfe99053_f2b5_435d_838e_06a8c59652dc.slice/crio-3292ad8070a7a77535cd8f0075363bae040c18c86bc2edcd1c123e6abfb03b28.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60933e3f_3b3c_40ab_a960_188a3b30b2f1.slice/crio-conmon-08d067cd23c628d314ccfeed5149e0d5bbfcb9c085541a0e898ec8eefc7e0b06.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfe99053_f2b5_435d_838e_06a8c59652dc.slice/crio-conmon-3292ad8070a7a77535cd8f0075363bae040c18c86bc2edcd1c123e6abfb03b28.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60933e3f_3b3c_40ab_a960_188a3b30b2f1.slice/crio-08d067cd23c628d314ccfeed5149e0d5bbfcb9c085541a0e898ec8eefc7e0b06.scope\": RecentStats: unable to find data in memory cache]" Dec 09 17:19:29 crc kubenswrapper[4853]: I1209 17:19:29.632764 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:29 crc kubenswrapper[4853]: E1209 17:19:29.632983 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 17:19:29 crc kubenswrapper[4853]: E1209 17:19:29.633015 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 17:19:29 crc kubenswrapper[4853]: E1209 17:19:29.633075 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift podName:2f6e868f-f4bc-42d3-bbe6-2a391e2b768d nodeName:}" failed. No retries permitted until 2025-12-09 17:19:31.633057868 +0000 UTC m=+1398.567797050 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift") pod "swift-storage-0" (UID: "2f6e868f-f4bc-42d3-bbe6-2a391e2b768d") : configmap "swift-ring-files" not found Dec 09 17:19:29 crc kubenswrapper[4853]: I1209 17:19:29.995495 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-slx9d"] Dec 09 17:19:29 crc kubenswrapper[4853]: I1209 17:19:29.997115 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.005239 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"10737df5-395b-499d-a49d-daac220f432c","Type":"ContainerStarted","Data":"6f0ba42f1434def03404d77a3fca590f13ceaa0ced6434d5dd8c2d40f6030e84"} Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.006518 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.012546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5l46m" event={"ID":"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c","Type":"ContainerStarted","Data":"f32a35b086e097d4708e93d68901d5f0e9cdc6931f1cb14d399cd8b5c6cecc2b"} Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.012750 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.022760 4853 generic.go:334] "Generic (PLEG): container finished" podID="60933e3f-3b3c-40ab-a960-188a3b30b2f1" containerID="08d067cd23c628d314ccfeed5149e0d5bbfcb9c085541a0e898ec8eefc7e0b06" exitCode=0 Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.022840 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" event={"ID":"60933e3f-3b3c-40ab-a960-188a3b30b2f1","Type":"ContainerDied","Data":"08d067cd23c628d314ccfeed5149e0d5bbfcb9c085541a0e898ec8eefc7e0b06"} Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.025217 4853 generic.go:334] "Generic (PLEG): container finished" podID="cfe99053-f2b5-435d-838e-06a8c59652dc" containerID="3292ad8070a7a77535cd8f0075363bae040c18c86bc2edcd1c123e6abfb03b28" exitCode=0 Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.025287 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" event={"ID":"cfe99053-f2b5-435d-838e-06a8c59652dc","Type":"ContainerDied","Data":"3292ad8070a7a77535cd8f0075363bae040c18c86bc2edcd1c123e6abfb03b28"} Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.029325 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-slx9d"] Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.043705 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-5l46m" podStartSLOduration=4.04367282 podStartE2EDuration="4.04367282s" podCreationTimestamp="2025-12-09 17:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:30.041474898 +0000 UTC m=+1396.976214090" watchObservedRunningTime="2025-12-09 17:19:30.04367282 +0000 UTC m=+1396.978412002" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.093674 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.00302138 podStartE2EDuration="8.093647248s" podCreationTimestamp="2025-12-09 17:19:22 +0000 UTC" firstStartedPulling="2025-12-09 17:19:22.872740145 +0000 UTC m=+1389.807479347" lastFinishedPulling="2025-12-09 17:19:27.963366033 +0000 UTC m=+1394.898105215" observedRunningTime="2025-12-09 17:19:30.061416626 +0000 UTC m=+1396.996155808" watchObservedRunningTime="2025-12-09 17:19:30.093647248 +0000 UTC m=+1397.028386430" Dec 09 17:19:30 crc 
kubenswrapper[4853]: I1209 17:19:30.140454 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1a56-account-create-update-prr4z"] Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.144398 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqxk\" (UniqueName: \"kubernetes.io/projected/1b36e250-d75c-41bf-a8a6-51c84ec06406-kube-api-access-hcqxk\") pod \"glance-db-create-slx9d\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.144839 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b36e250-d75c-41bf-a8a6-51c84ec06406-operator-scripts\") pod \"glance-db-create-slx9d\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.149307 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1a56-account-create-update-prr4z"] Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.154779 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.156935 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.247090 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqxk\" (UniqueName: \"kubernetes.io/projected/1b36e250-d75c-41bf-a8a6-51c84ec06406-kube-api-access-hcqxk\") pod \"glance-db-create-slx9d\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.247854 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033e773c-398b-493a-92aa-464307b11906-operator-scripts\") pod \"glance-1a56-account-create-update-prr4z\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.248093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zrbl\" (UniqueName: \"kubernetes.io/projected/033e773c-398b-493a-92aa-464307b11906-kube-api-access-4zrbl\") pod \"glance-1a56-account-create-update-prr4z\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.248237 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b36e250-d75c-41bf-a8a6-51c84ec06406-operator-scripts\") pod \"glance-db-create-slx9d\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.249327 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b36e250-d75c-41bf-a8a6-51c84ec06406-operator-scripts\") pod \"glance-db-create-slx9d\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc 
kubenswrapper[4853]: I1209 17:19:30.269226 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqxk\" (UniqueName: \"kubernetes.io/projected/1b36e250-d75c-41bf-a8a6-51c84ec06406-kube-api-access-hcqxk\") pod \"glance-db-create-slx9d\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.318103 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-slx9d" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.350679 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zrbl\" (UniqueName: \"kubernetes.io/projected/033e773c-398b-493a-92aa-464307b11906-kube-api-access-4zrbl\") pod \"glance-1a56-account-create-update-prr4z\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.350892 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033e773c-398b-493a-92aa-464307b11906-operator-scripts\") pod \"glance-1a56-account-create-update-prr4z\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.351790 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033e773c-398b-493a-92aa-464307b11906-operator-scripts\") pod \"glance-1a56-account-create-update-prr4z\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.369887 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zrbl\" (UniqueName: \"kubernetes.io/projected/033e773c-398b-493a-92aa-464307b11906-kube-api-access-4zrbl\") pod \"glance-1a56-account-create-update-prr4z\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.481509 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.851261 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-slx9d"] Dec 09 17:19:30 crc kubenswrapper[4853]: W1209 17:19:30.859827 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b36e250_d75c_41bf_a8a6_51c84ec06406.slice/crio-07c50aa2616ae9e535c1259e504e7c0a8fe98159d61c48117aedf06f552dc619 WatchSource:0}: Error finding container 07c50aa2616ae9e535c1259e504e7c0a8fe98159d61c48117aedf06f552dc619: Status 404 returned error can't find the container with id 07c50aa2616ae9e535c1259e504e7c0a8fe98159d61c48117aedf06f552dc619 Dec 09 17:19:30 crc kubenswrapper[4853]: I1209 17:19:30.987798 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1a56-account-create-update-prr4z"] Dec 09 17:19:31 crc kubenswrapper[4853]: W1209 17:19:30.999966 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod033e773c_398b_493a_92aa_464307b11906.slice/crio-af9703d8751a1df288fb85159f61588bee3831954e200f21e7da37a8d2d00acc WatchSource:0}: Error finding container af9703d8751a1df288fb85159f61588bee3831954e200f21e7da37a8d2d00acc: Status 404 returned error can't find the container with id af9703d8751a1df288fb85159f61588bee3831954e200f21e7da37a8d2d00acc Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.035129 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-slx9d" event={"ID":"1b36e250-d75c-41bf-a8a6-51c84ec06406","Type":"ContainerStarted","Data":"5177ef060f920e68cdca536523b5eb38e8d1770cf3a3736255b82cf78e9d8c26"} Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.035182 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-slx9d" event={"ID":"1b36e250-d75c-41bf-a8a6-51c84ec06406","Type":"ContainerStarted","Data":"07c50aa2616ae9e535c1259e504e7c0a8fe98159d61c48117aedf06f552dc619"} Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.036540 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1a56-account-create-update-prr4z" event={"ID":"033e773c-398b-493a-92aa-464307b11906","Type":"ContainerStarted","Data":"af9703d8751a1df288fb85159f61588bee3831954e200f21e7da37a8d2d00acc"} Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.071455 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-slx9d" podStartSLOduration=2.071428831 podStartE2EDuration="2.071428831s" podCreationTimestamp="2025-12-09 17:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:31.048501039 +0000 UTC m=+1397.983240241" watchObservedRunningTime="2025-12-09 17:19:31.071428831 +0000 UTC m=+1398.006168013" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.630260 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.682981 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.697379 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe99053-f2b5-435d-838e-06a8c59652dc-operator-scripts\") pod \"cfe99053-f2b5-435d-838e-06a8c59652dc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.697442 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd2vq\" (UniqueName: \"kubernetes.io/projected/cfe99053-f2b5-435d-838e-06a8c59652dc-kube-api-access-cd2vq\") pod \"cfe99053-f2b5-435d-838e-06a8c59652dc\" (UID: \"cfe99053-f2b5-435d-838e-06a8c59652dc\") " Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.699537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.700999 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe99053-f2b5-435d-838e-06a8c59652dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfe99053-f2b5-435d-838e-06a8c59652dc" (UID: "cfe99053-f2b5-435d-838e-06a8c59652dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.705050 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v2tq9"] Dec 09 17:19:31 crc kubenswrapper[4853]: E1209 17:19:31.705472 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60933e3f-3b3c-40ab-a960-188a3b30b2f1" containerName="mariadb-database-create" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.705493 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="60933e3f-3b3c-40ab-a960-188a3b30b2f1" containerName="mariadb-database-create" Dec 09 17:19:31 crc kubenswrapper[4853]: E1209 17:19:31.705510 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe99053-f2b5-435d-838e-06a8c59652dc" containerName="mariadb-account-create-update" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.705518 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe99053-f2b5-435d-838e-06a8c59652dc" containerName="mariadb-account-create-update" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.705736 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="60933e3f-3b3c-40ab-a960-188a3b30b2f1" containerName="mariadb-database-create" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.705866 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe99053-f2b5-435d-838e-06a8c59652dc" containerName="mariadb-account-create-update" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.706557 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: E1209 17:19:31.708704 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 17:19:31 crc kubenswrapper[4853]: E1209 17:19:31.708739 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 17:19:31 crc kubenswrapper[4853]: E1209 17:19:31.708846 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift podName:2f6e868f-f4bc-42d3-bbe6-2a391e2b768d nodeName:}" failed. No retries permitted until 2025-12-09 17:19:35.708824488 +0000 UTC m=+1402.643563670 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift") pod "swift-storage-0" (UID: "2f6e868f-f4bc-42d3-bbe6-2a391e2b768d") : configmap "swift-ring-files" not found Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.711234 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe99053-f2b5-435d-838e-06a8c59652dc-kube-api-access-cd2vq" (OuterVolumeSpecName: "kube-api-access-cd2vq") pod "cfe99053-f2b5-435d-838e-06a8c59652dc" (UID: "cfe99053-f2b5-435d-838e-06a8c59652dc"). InnerVolumeSpecName "kube-api-access-cd2vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.713842 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.714167 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.714285 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.728528 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v2tq9"] Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.801588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60933e3f-3b3c-40ab-a960-188a3b30b2f1-operator-scripts\") pod \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.802053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60933e3f-3b3c-40ab-a960-188a3b30b2f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60933e3f-3b3c-40ab-a960-188a3b30b2f1" (UID: "60933e3f-3b3c-40ab-a960-188a3b30b2f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.803222 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl5sh\" (UniqueName: \"kubernetes.io/projected/60933e3f-3b3c-40ab-a960-188a3b30b2f1-kube-api-access-bl5sh\") pod \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\" (UID: \"60933e3f-3b3c-40ab-a960-188a3b30b2f1\") " Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.803859 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-scripts\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.805875 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-swiftconf\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-combined-ca-bundle\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806190 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/93e3401b-eae8-4c50-a73b-686525de14a2-kube-api-access-6gqdq\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806348 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-ring-data-devices\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806408 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60933e3f-3b3c-40ab-a960-188a3b30b2f1-kube-api-access-bl5sh" (OuterVolumeSpecName: "kube-api-access-bl5sh") pod "60933e3f-3b3c-40ab-a960-188a3b30b2f1" (UID: "60933e3f-3b3c-40ab-a960-188a3b30b2f1"). InnerVolumeSpecName "kube-api-access-bl5sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806432 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-dispersionconf\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93e3401b-eae8-4c50-a73b-686525de14a2-etc-swift\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806926 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60933e3f-3b3c-40ab-a960-188a3b30b2f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806950 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfe99053-f2b5-435d-838e-06a8c59652dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806961 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd2vq\" (UniqueName: \"kubernetes.io/projected/cfe99053-f2b5-435d-838e-06a8c59652dc-kube-api-access-cd2vq\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.806975 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl5sh\" (UniqueName: \"kubernetes.io/projected/60933e3f-3b3c-40ab-a960-188a3b30b2f1-kube-api-access-bl5sh\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.909057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-dispersionconf\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.909121 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93e3401b-eae8-4c50-a73b-686525de14a2-etc-swift\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.909338 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-scripts\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.909422 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-swiftconf\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.909466 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-combined-ca-bundle\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.909533 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/93e3401b-eae8-4c50-a73b-686525de14a2-kube-api-access-6gqdq\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.909609 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-ring-data-devices\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.910527 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93e3401b-eae8-4c50-a73b-686525de14a2-etc-swift\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.911015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-scripts\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.911309 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-ring-data-devices\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.916828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-combined-ca-bundle\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.916943 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-dispersionconf\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.919148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-swiftconf\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:31 crc kubenswrapper[4853]: I1209 17:19:31.929434 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gqdq\" (UniqueName: 
\"kubernetes.io/projected/93e3401b-eae8-4c50-a73b-686525de14a2-kube-api-access-6gqdq\") pod \"swift-ring-rebalance-v2tq9\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.051448 4853 generic.go:334] "Generic (PLEG): container finished" podID="033e773c-398b-493a-92aa-464307b11906" containerID="2690f02c51eb5bd10362927c848067cfd3c4487fa7afa88c35c89a0ddfb1d1ac" exitCode=0 Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.051528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1a56-account-create-update-prr4z" event={"ID":"033e773c-398b-493a-92aa-464307b11906","Type":"ContainerDied","Data":"2690f02c51eb5bd10362927c848067cfd3c4487fa7afa88c35c89a0ddfb1d1ac"} Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.054228 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" event={"ID":"60933e3f-3b3c-40ab-a960-188a3b30b2f1","Type":"ContainerDied","Data":"13ab6ae3bc924e14357d16c3397f2573a8e159dd26c2a952275c42a394ed495c"} Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.054262 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13ab6ae3bc924e14357d16c3397f2573a8e159dd26c2a952275c42a394ed495c" Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.054316 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-qlcff" Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.064959 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.068687 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" event={"ID":"cfe99053-f2b5-435d-838e-06a8c59652dc","Type":"ContainerDied","Data":"3f209ee93b16b8352824e929572794fe00683a6f2d292914091269b96b4c5d36"} Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.068729 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f209ee93b16b8352824e929572794fe00683a6f2d292914091269b96b4c5d36" Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.068789 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-7926-account-create-update-2xwpc" Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.081450 4853 generic.go:334] "Generic (PLEG): container finished" podID="1b36e250-d75c-41bf-a8a6-51c84ec06406" containerID="5177ef060f920e68cdca536523b5eb38e8d1770cf3a3736255b82cf78e9d8c26" exitCode=0 Dec 09 17:19:32 crc kubenswrapper[4853]: I1209 17:19:32.081492 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-slx9d" event={"ID":"1b36e250-d75c-41bf-a8a6-51c84ec06406","Type":"ContainerDied","Data":"5177ef060f920e68cdca536523b5eb38e8d1770cf3a3736255b82cf78e9d8c26"} Dec 09 17:19:33 crc kubenswrapper[4853]: I1209 17:19:33.603530 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-658f7d5d7b-lbkxw" podUID="511cc5d6-eccf-443e-92c3-8b0af97b904a" containerName="console" containerID="cri-o://93272ba077b8ccfdf9f1ffda9af50df1405300d2633303a3a565e16f2de6ec4d" gracePeriod=15 Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.106148 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-658f7d5d7b-lbkxw_511cc5d6-eccf-443e-92c3-8b0af97b904a/console/0.log" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.106202 4853 generic.go:334] "Generic (PLEG): container finished" podID="511cc5d6-eccf-443e-92c3-8b0af97b904a" containerID="93272ba077b8ccfdf9f1ffda9af50df1405300d2633303a3a565e16f2de6ec4d" exitCode=2 Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.106239 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-658f7d5d7b-lbkxw" event={"ID":"511cc5d6-eccf-443e-92c3-8b0af97b904a","Type":"ContainerDied","Data":"93272ba077b8ccfdf9f1ffda9af50df1405300d2633303a3a565e16f2de6ec4d"} Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.397790 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nq7h2"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.399793 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.408151 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nq7h2"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.501407 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-36df-account-create-update-qs6kt"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.503041 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.505766 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.511501 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-36df-account-create-update-qs6kt"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.577194 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9117d61-707d-4f83-9a09-eaf1c26c1b11-operator-scripts\") pod \"keystone-db-create-nq7h2\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.577385 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtmrv\" (UniqueName: \"kubernetes.io/projected/b9117d61-707d-4f83-9a09-eaf1c26c1b11-kube-api-access-jtmrv\") pod \"keystone-db-create-nq7h2\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.679029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21c5479-b7f8-47f1-9503-ae75e212fe56-operator-scripts\") pod \"keystone-36df-account-create-update-qs6kt\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.680100 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9117d61-707d-4f83-9a09-eaf1c26c1b11-operator-scripts\") pod \"keystone-db-create-nq7h2\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.680857 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9117d61-707d-4f83-9a09-eaf1c26c1b11-operator-scripts\") pod \"keystone-db-create-nq7h2\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.681513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtmrv\" (UniqueName: \"kubernetes.io/projected/b9117d61-707d-4f83-9a09-eaf1c26c1b11-kube-api-access-jtmrv\") pod \"keystone-db-create-nq7h2\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.681763 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fhmq\" (UniqueName: \"kubernetes.io/projected/d21c5479-b7f8-47f1-9503-ae75e212fe56-kube-api-access-6fhmq\") pod \"keystone-36df-account-create-update-qs6kt\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.706463 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtmrv\" (UniqueName: \"kubernetes.io/projected/b9117d61-707d-4f83-9a09-eaf1c26c1b11-kube-api-access-jtmrv\") pod 
\"keystone-db-create-nq7h2\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.727468 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.759613 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dv9px"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.761653 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dv9px" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.773712 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dv9px"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.784105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fhmq\" (UniqueName: \"kubernetes.io/projected/d21c5479-b7f8-47f1-9503-ae75e212fe56-kube-api-access-6fhmq\") pod \"keystone-36df-account-create-update-qs6kt\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.785646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21c5479-b7f8-47f1-9503-ae75e212fe56-operator-scripts\") pod \"keystone-36df-account-create-update-qs6kt\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.786646 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21c5479-b7f8-47f1-9503-ae75e212fe56-operator-scripts\") pod \"keystone-36df-account-create-update-qs6kt\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.809019 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fhmq\" (UniqueName: \"kubernetes.io/projected/d21c5479-b7f8-47f1-9503-ae75e212fe56-kube-api-access-6fhmq\") pod \"keystone-36df-account-create-update-qs6kt\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.823345 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.875110 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6cff-account-create-update-nzq9s"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.876621 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.879613 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.887810 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtvgs\" (UniqueName: \"kubernetes.io/projected/170cafdb-5ab6-47f8-ba66-0398e9ca3904-kube-api-access-mtvgs\") pod \"placement-db-create-dv9px\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " pod="openstack/placement-db-create-dv9px" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.888085 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170cafdb-5ab6-47f8-ba66-0398e9ca3904-operator-scripts\") pod \"placement-db-create-dv9px\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " pod="openstack/placement-db-create-dv9px" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.907691 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cff-account-create-update-nzq9s"] Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.991211 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cf06379-f14c-4652-9768-459276512e7f-operator-scripts\") pod \"placement-6cff-account-create-update-nzq9s\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.991275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170cafdb-5ab6-47f8-ba66-0398e9ca3904-operator-scripts\") pod \"placement-db-create-dv9px\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " pod="openstack/placement-db-create-dv9px" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.991338 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtvgs\" (UniqueName: \"kubernetes.io/projected/170cafdb-5ab6-47f8-ba66-0398e9ca3904-kube-api-access-mtvgs\") pod \"placement-db-create-dv9px\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " pod="openstack/placement-db-create-dv9px" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.991438 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmn4j\" (UniqueName: \"kubernetes.io/projected/8cf06379-f14c-4652-9768-459276512e7f-kube-api-access-vmn4j\") pod \"placement-6cff-account-create-update-nzq9s\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:34 crc kubenswrapper[4853]: I1209 17:19:34.993298 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170cafdb-5ab6-47f8-ba66-0398e9ca3904-operator-scripts\") pod \"placement-db-create-dv9px\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " pod="openstack/placement-db-create-dv9px" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.012726 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtvgs\" (UniqueName: 
\"kubernetes.io/projected/170cafdb-5ab6-47f8-ba66-0398e9ca3904-kube-api-access-mtvgs\") pod \"placement-db-create-dv9px\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " pod="openstack/placement-db-create-dv9px" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.075661 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-slx9d" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.092775 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cf06379-f14c-4652-9768-459276512e7f-operator-scripts\") pod \"placement-6cff-account-create-update-nzq9s\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.093280 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmn4j\" (UniqueName: \"kubernetes.io/projected/8cf06379-f14c-4652-9768-459276512e7f-kube-api-access-vmn4j\") pod \"placement-6cff-account-create-update-nzq9s\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.098698 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cf06379-f14c-4652-9768-459276512e7f-operator-scripts\") pod \"placement-6cff-account-create-update-nzq9s\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.124239 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.126807 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-slx9d" event={"ID":"1b36e250-d75c-41bf-a8a6-51c84ec06406","Type":"ContainerDied","Data":"07c50aa2616ae9e535c1259e504e7c0a8fe98159d61c48117aedf06f552dc619"} Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.126853 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c50aa2616ae9e535c1259e504e7c0a8fe98159d61c48117aedf06f552dc619" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.126911 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-slx9d" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.130251 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmn4j\" (UniqueName: \"kubernetes.io/projected/8cf06379-f14c-4652-9768-459276512e7f-kube-api-access-vmn4j\") pod \"placement-6cff-account-create-update-nzq9s\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.132924 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1a56-account-create-update-prr4z" event={"ID":"033e773c-398b-493a-92aa-464307b11906","Type":"ContainerDied","Data":"af9703d8751a1df288fb85159f61588bee3831954e200f21e7da37a8d2d00acc"} Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.132956 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9703d8751a1df288fb85159f61588bee3831954e200f21e7da37a8d2d00acc" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.133006 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1a56-account-create-update-prr4z" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.152715 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dv9px" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.176445 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.194678 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b36e250-d75c-41bf-a8a6-51c84ec06406-operator-scripts\") pod \"1b36e250-d75c-41bf-a8a6-51c84ec06406\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.194848 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcqxk\" (UniqueName: \"kubernetes.io/projected/1b36e250-d75c-41bf-a8a6-51c84ec06406-kube-api-access-hcqxk\") pod \"1b36e250-d75c-41bf-a8a6-51c84ec06406\" (UID: \"1b36e250-d75c-41bf-a8a6-51c84ec06406\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.195505 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b36e250-d75c-41bf-a8a6-51c84ec06406-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b36e250-d75c-41bf-a8a6-51c84ec06406" (UID: "1b36e250-d75c-41bf-a8a6-51c84ec06406"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.195857 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b36e250-d75c-41bf-a8a6-51c84ec06406-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.200016 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b36e250-d75c-41bf-a8a6-51c84ec06406-kube-api-access-hcqxk" (OuterVolumeSpecName: "kube-api-access-hcqxk") pod "1b36e250-d75c-41bf-a8a6-51c84ec06406" (UID: "1b36e250-d75c-41bf-a8a6-51c84ec06406"). InnerVolumeSpecName "kube-api-access-hcqxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.214906 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-658f7d5d7b-lbkxw_511cc5d6-eccf-443e-92c3-8b0af97b904a/console/0.log" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.214991 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.297318 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zrbl\" (UniqueName: \"kubernetes.io/projected/033e773c-398b-493a-92aa-464307b11906-kube-api-access-4zrbl\") pod \"033e773c-398b-493a-92aa-464307b11906\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.297386 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033e773c-398b-493a-92aa-464307b11906-operator-scripts\") pod \"033e773c-398b-493a-92aa-464307b11906\" (UID: \"033e773c-398b-493a-92aa-464307b11906\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.298037 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcqxk\" (UniqueName: \"kubernetes.io/projected/1b36e250-d75c-41bf-a8a6-51c84ec06406-kube-api-access-hcqxk\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.298519 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033e773c-398b-493a-92aa-464307b11906-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "033e773c-398b-493a-92aa-464307b11906" (UID: "033e773c-398b-493a-92aa-464307b11906"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.309372 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033e773c-398b-493a-92aa-464307b11906-kube-api-access-4zrbl" (OuterVolumeSpecName: "kube-api-access-4zrbl") pod "033e773c-398b-493a-92aa-464307b11906" (UID: "033e773c-398b-493a-92aa-464307b11906"). InnerVolumeSpecName "kube-api-access-4zrbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.399837 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz6qh\" (UniqueName: \"kubernetes.io/projected/511cc5d6-eccf-443e-92c3-8b0af97b904a-kube-api-access-mz6qh\") pod \"511cc5d6-eccf-443e-92c3-8b0af97b904a\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.400303 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-oauth-serving-cert\") pod \"511cc5d6-eccf-443e-92c3-8b0af97b904a\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.400369 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-service-ca\") pod \"511cc5d6-eccf-443e-92c3-8b0af97b904a\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.400551 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-trusted-ca-bundle\") pod \"511cc5d6-eccf-443e-92c3-8b0af97b904a\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.400704 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-config\") pod \"511cc5d6-eccf-443e-92c3-8b0af97b904a\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.400852 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-oauth-config\") pod \"511cc5d6-eccf-443e-92c3-8b0af97b904a\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.400884 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-serving-cert\") pod \"511cc5d6-eccf-443e-92c3-8b0af97b904a\" (UID: \"511cc5d6-eccf-443e-92c3-8b0af97b904a\") " Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.401872 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zrbl\" (UniqueName: \"kubernetes.io/projected/033e773c-398b-493a-92aa-464307b11906-kube-api-access-4zrbl\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.401890 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033e773c-398b-493a-92aa-464307b11906-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.402569 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "511cc5d6-eccf-443e-92c3-8b0af97b904a" (UID: "511cc5d6-eccf-443e-92c3-8b0af97b904a"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.402989 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-service-ca" (OuterVolumeSpecName: "service-ca") pod "511cc5d6-eccf-443e-92c3-8b0af97b904a" (UID: "511cc5d6-eccf-443e-92c3-8b0af97b904a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.403145 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-config" (OuterVolumeSpecName: "console-config") pod "511cc5d6-eccf-443e-92c3-8b0af97b904a" (UID: "511cc5d6-eccf-443e-92c3-8b0af97b904a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.403166 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "511cc5d6-eccf-443e-92c3-8b0af97b904a" (UID: "511cc5d6-eccf-443e-92c3-8b0af97b904a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.408089 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "511cc5d6-eccf-443e-92c3-8b0af97b904a" (UID: "511cc5d6-eccf-443e-92c3-8b0af97b904a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.409716 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511cc5d6-eccf-443e-92c3-8b0af97b904a-kube-api-access-mz6qh" (OuterVolumeSpecName: "kube-api-access-mz6qh") pod "511cc5d6-eccf-443e-92c3-8b0af97b904a" (UID: "511cc5d6-eccf-443e-92c3-8b0af97b904a"). InnerVolumeSpecName "kube-api-access-mz6qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.410421 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "511cc5d6-eccf-443e-92c3-8b0af97b904a" (UID: "511cc5d6-eccf-443e-92c3-8b0af97b904a"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.503310 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.503341 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.503353 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.503363 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/511cc5d6-eccf-443e-92c3-8b0af97b904a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.503372 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz6qh\" (UniqueName: \"kubernetes.io/projected/511cc5d6-eccf-443e-92c3-8b0af97b904a-kube-api-access-mz6qh\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.503381 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.503389 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/511cc5d6-eccf-443e-92c3-8b0af97b904a-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.655434 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-36df-account-create-update-qs6kt"] Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.689225 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.702276 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v2tq9"] Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.709245 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:35 crc kubenswrapper[4853]: E1209 17:19:35.709609 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 17:19:35 crc kubenswrapper[4853]: E1209 17:19:35.709724 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 17:19:35 crc kubenswrapper[4853]: E1209 17:19:35.709842 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift podName:2f6e868f-f4bc-42d3-bbe6-2a391e2b768d nodeName:}" failed. 
No retries permitted until 2025-12-09 17:19:43.709823603 +0000 UTC m=+1410.644562795 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift") pod "swift-storage-0" (UID: "2f6e868f-f4bc-42d3-bbe6-2a391e2b768d") : configmap "swift-ring-files" not found
Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.744478 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nq7h2"]
Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.799393 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dv9px"]
Dec 09 17:19:35 crc kubenswrapper[4853]: I1209 17:19:35.864466 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cff-account-create-update-nzq9s"]
Dec 09 17:19:35 crc kubenswrapper[4853]: W1209 17:19:35.883871 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cf06379_f14c_4652_9768_459276512e7f.slice/crio-971958f0c0cdcf301df538e73c5589e8b84038f5c7ee1bd6c5daf7baf82e080d WatchSource:0}: Error finding container 971958f0c0cdcf301df538e73c5589e8b84038f5c7ee1bd6c5daf7baf82e080d: Status 404 returned error can't find the container with id 971958f0c0cdcf301df538e73c5589e8b84038f5c7ee1bd6c5daf7baf82e080d
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.143638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2tq9" event={"ID":"93e3401b-eae8-4c50-a73b-686525de14a2","Type":"ContainerStarted","Data":"3b4af3c1d8d8262677a112ae7bb7a898f43a2ae991cdbc0d29b8d205c536cdcb"}
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.145423 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-658f7d5d7b-lbkxw_511cc5d6-eccf-443e-92c3-8b0af97b904a/console/0.log"
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.145491 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-658f7d5d7b-lbkxw" event={"ID":"511cc5d6-eccf-443e-92c3-8b0af97b904a","Type":"ContainerDied","Data":"aa18419ddd9d0765a08c0550dd9bfaca19e12ee2788b9dc80f58198a38115225"}
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.145537 4853 scope.go:117] "RemoveContainer" containerID="93272ba077b8ccfdf9f1ffda9af50df1405300d2633303a3a565e16f2de6ec4d"
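
The etc-swift failure above is retried under per-operation exponential backoff: this attempt schedules the next retry 8s out ("durationBeforeRetry 8s"), and when it fails again at 17:19:43 further down, the window doubles to 16s. A stand-alone sketch of that doubling follows; the 500 ms starting delay and the 2m2s cap are assumptions modeled on kubelet's usual defaults for volume operations, not values read from this log.

    package main

    import (
        "fmt"
        "time"
    )

    // Exponential backoff as kubelet applies it to a repeatedly failing
    // volume operation: double the wait after each error, up to a cap.
    const (
        initialDelay = 500 * time.Millisecond         // assumed default
        maxDelay     = 2*time.Minute + 2*time.Second  // assumed cap
    )

    func main() {
        delay := initialDelay
        for attempt := 1; attempt <= 9; attempt++ {
            fmt.Printf("attempt %d failed; durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        // Attempts 5 and 6 print 8s and 16s, matching the two
        // "No retries permitted until ..." entries in this log.
    }

Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.145548 4853 util.go:48] "No ready sandbox for pod can be found.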
Need to start a new one" pod="openshift-console/console-658f7d5d7b-lbkxw" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.149920 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dv9px" event={"ID":"170cafdb-5ab6-47f8-ba66-0398e9ca3904","Type":"ContainerStarted","Data":"d3bf6d47b9579e8c73f721fbc39cfa67218de2ab2aff485cd76c37637cdab1ad"} Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.149963 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dv9px" event={"ID":"170cafdb-5ab6-47f8-ba66-0398e9ca3904","Type":"ContainerStarted","Data":"735ae79fc35fe37b16e39c1f7643de183fdc88d3ce185e9098710bf6c895f101"} Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.153121 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerStarted","Data":"4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76"} Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.159984 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-36df-account-create-update-qs6kt" event={"ID":"d21c5479-b7f8-47f1-9503-ae75e212fe56","Type":"ContainerStarted","Data":"e4b4b4ebefa767939655f409111f5d21594b364b729ac15bce205a399a0f4fbe"} Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.160026 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-36df-account-create-update-qs6kt" event={"ID":"d21c5479-b7f8-47f1-9503-ae75e212fe56","Type":"ContainerStarted","Data":"f171d2f90b43d88c4954cfbb30fb515302841cfaa927676aff7bc61ad89cd9c8"} Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.178510 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nq7h2" event={"ID":"b9117d61-707d-4f83-9a09-eaf1c26c1b11","Type":"ContainerStarted","Data":"87c96d96b21183333ddbefe7f28755baf92d84f54db5b67f452b8b72439a4799"} Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.178567 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nq7h2" event={"ID":"b9117d61-707d-4f83-9a09-eaf1c26c1b11","Type":"ContainerStarted","Data":"97a5470a4e1f3a0602922660439d7b9fb0de6582825802331f20a388e4c6a205"} Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.181235 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-658f7d5d7b-lbkxw"] Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.193155 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-658f7d5d7b-lbkxw"] Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.195733 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-dv9px" podStartSLOduration=2.195706521 podStartE2EDuration="2.195706521s" podCreationTimestamp="2025-12-09 17:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:36.181782901 +0000 UTC m=+1403.116522083" watchObservedRunningTime="2025-12-09 17:19:36.195706521 +0000 UTC m=+1403.130445703" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.197201 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cff-account-create-update-nzq9s" event={"ID":"8cf06379-f14c-4652-9768-459276512e7f","Type":"ContainerStarted","Data":"61b393f420cfcd4fae0768f517883770b7a413242ee2a72404f9e7eaec7a8b07"} Dec 09 17:19:36 crc 
kubenswrapper[4853]: I1209 17:19:36.197377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cff-account-create-update-nzq9s" event={"ID":"8cf06379-f14c-4652-9768-459276512e7f","Type":"ContainerStarted","Data":"971958f0c0cdcf301df538e73c5589e8b84038f5c7ee1bd6c5daf7baf82e080d"}
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.205258 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-36df-account-create-update-qs6kt" podStartSLOduration=2.2052374869999998 podStartE2EDuration="2.205237487s" podCreationTimestamp="2025-12-09 17:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:36.197261534 +0000 UTC m=+1403.132000716" watchObservedRunningTime="2025-12-09 17:19:36.205237487 +0000 UTC m=+1403.139976669"
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.227676 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-nq7h2" podStartSLOduration=2.227656864 podStartE2EDuration="2.227656864s" podCreationTimestamp="2025-12-09 17:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:36.218072507 +0000 UTC m=+1403.152811689" watchObservedRunningTime="2025-12-09 17:19:36.227656864 +0000 UTC m=+1403.162396046"
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.247810 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6cff-account-create-update-nzq9s" podStartSLOduration=2.247791318 podStartE2EDuration="2.247791318s" podCreationTimestamp="2025-12-09 17:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:19:36.236230554 +0000 UTC m=+1403.170969736" watchObservedRunningTime="2025-12-09 17:19:36.247791318 +0000 UTC m=+1403.182530500"
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.444573 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.789449 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jrd55"]
Dec 09 17:19:36 crc kubenswrapper[4853]: E1209 17:19:36.789889 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b36e250-d75c-41bf-a8a6-51c84ec06406" containerName="mariadb-database-create"
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.789901 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b36e250-d75c-41bf-a8a6-51c84ec06406" containerName="mariadb-database-create"
Dec 09 17:19:36 crc kubenswrapper[4853]: E1209 17:19:36.789931 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033e773c-398b-493a-92aa-464307b11906" containerName="mariadb-account-create-update"
Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.789937 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="033e773c-398b-493a-92aa-464307b11906" containerName="mariadb-account-create-update"
Dec 09 17:19:36 crc kubenswrapper[4853]: E1209 17:19:36.789956 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511cc5d6-eccf-443e-92c3-8b0af97b904a" containerName="console"
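
The pod_startup_latency_tracker entries above log podStartSLOduration. With no image pull involved (both pull timestamps are the zero value "0001-01-01 00:00:00"), it reduces to the time from podCreationTimestamp to the time the pod was observed running. A quick check of the keystone-36df-account-create-update-qs6kt numbers, taken verbatim from the log (the log's podStartSLOduration=2.2052374869999998 is the same duration after conversion to a float64 of seconds):

    package main

    import (
        "fmt"
        "time"
    )

    // Recompute podStartSLOduration from the two timestamps in the log
    // entry above. The layout matches Go's default time.Time formatting,
    // which is what the kubelet log renders.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-12-09 17:19:34 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-12-09 17:19:36.205237487 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 2.205237487s
    }

Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.789962 4853 state_mem.go:107] "Deleted CPUSet assignment"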
podUID="511cc5d6-eccf-443e-92c3-8b0af97b904a" containerName="console" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.790152 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b36e250-d75c-41bf-a8a6-51c84ec06406" containerName="mariadb-database-create" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.790163 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="033e773c-398b-493a-92aa-464307b11906" containerName="mariadb-account-create-update" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.790174 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="511cc5d6-eccf-443e-92c3-8b0af97b904a" containerName="console" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.790866 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.803182 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jrd55"] Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.933819 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfxp\" (UniqueName: \"kubernetes.io/projected/ab2641ad-aacc-489a-b452-0751797303e0-kube-api-access-5wfxp\") pod \"mysqld-exporter-openstack-cell1-db-create-jrd55\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:36 crc kubenswrapper[4853]: I1209 17:19:36.933871 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2641ad-aacc-489a-b452-0751797303e0-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-jrd55\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.012048 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-a2c0-account-create-update-l856k"] Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.013966 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.016780 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.027419 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a2c0-account-create-update-l856k"] Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.035984 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfxp\" (UniqueName: \"kubernetes.io/projected/ab2641ad-aacc-489a-b452-0751797303e0-kube-api-access-5wfxp\") pod \"mysqld-exporter-openstack-cell1-db-create-jrd55\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.036271 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2641ad-aacc-489a-b452-0751797303e0-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-jrd55\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.037287 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2641ad-aacc-489a-b452-0751797303e0-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-jrd55\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.057226 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfxp\" (UniqueName: \"kubernetes.io/projected/ab2641ad-aacc-489a-b452-0751797303e0-kube-api-access-5wfxp\") pod \"mysqld-exporter-openstack-cell1-db-create-jrd55\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.113341 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.138768 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fab64e-a230-4cd5-a766-7d5668603181-operator-scripts\") pod \"mysqld-exporter-a2c0-account-create-update-l856k\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.138975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/38fab64e-a230-4cd5-a766-7d5668603181-kube-api-access-nfrvh\") pod \"mysqld-exporter-a2c0-account-create-update-l856k\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.212235 4853 generic.go:334] "Generic (PLEG): container finished" podID="b9117d61-707d-4f83-9a09-eaf1c26c1b11" containerID="87c96d96b21183333ddbefe7f28755baf92d84f54db5b67f452b8b72439a4799" exitCode=0 Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.212289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nq7h2" event={"ID":"b9117d61-707d-4f83-9a09-eaf1c26c1b11","Type":"ContainerDied","Data":"87c96d96b21183333ddbefe7f28755baf92d84f54db5b67f452b8b72439a4799"} Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.214676 4853 generic.go:334] "Generic (PLEG): container finished" podID="8cf06379-f14c-4652-9768-459276512e7f" containerID="61b393f420cfcd4fae0768f517883770b7a413242ee2a72404f9e7eaec7a8b07" exitCode=0 Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.214770 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cff-account-create-update-nzq9s" event={"ID":"8cf06379-f14c-4652-9768-459276512e7f","Type":"ContainerDied","Data":"61b393f420cfcd4fae0768f517883770b7a413242ee2a72404f9e7eaec7a8b07"} Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.219392 4853 generic.go:334] "Generic (PLEG): container finished" podID="170cafdb-5ab6-47f8-ba66-0398e9ca3904" containerID="d3bf6d47b9579e8c73f721fbc39cfa67218de2ab2aff485cd76c37637cdab1ad" exitCode=0 Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.219438 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dv9px" event={"ID":"170cafdb-5ab6-47f8-ba66-0398e9ca3904","Type":"ContainerDied","Data":"d3bf6d47b9579e8c73f721fbc39cfa67218de2ab2aff485cd76c37637cdab1ad"} Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.221177 4853 generic.go:334] "Generic (PLEG): container finished" podID="d21c5479-b7f8-47f1-9503-ae75e212fe56" containerID="e4b4b4ebefa767939655f409111f5d21594b364b729ac15bce205a399a0f4fbe" exitCode=0 Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.221215 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-36df-account-create-update-qs6kt" event={"ID":"d21c5479-b7f8-47f1-9503-ae75e212fe56","Type":"ContainerDied","Data":"e4b4b4ebefa767939655f409111f5d21594b364b729ac15bce205a399a0f4fbe"} Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.224115 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.243219 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/38fab64e-a230-4cd5-a766-7d5668603181-kube-api-access-nfrvh\") pod \"mysqld-exporter-a2c0-account-create-update-l856k\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.249793 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fab64e-a230-4cd5-a766-7d5668603181-operator-scripts\") pod \"mysqld-exporter-a2c0-account-create-update-l856k\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.251091 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fab64e-a230-4cd5-a766-7d5668603181-operator-scripts\") pod \"mysqld-exporter-a2c0-account-create-update-l856k\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.271180 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/38fab64e-a230-4cd5-a766-7d5668603181-kube-api-access-nfrvh\") pod \"mysqld-exporter-a2c0-account-create-update-l856k\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.336285 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.374960 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wz58m"] Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.378169 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="dnsmasq-dns" containerID="cri-o://ee11d28451444e8e1d1d965743c94c769d26a065aed18448c3f99f995ba951b5" gracePeriod=10 Dec 09 17:19:37 crc kubenswrapper[4853]: I1209 17:19:37.616087 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511cc5d6-eccf-443e-92c3-8b0af97b904a" path="/var/lib/kubelet/pods/511cc5d6-eccf-443e-92c3-8b0af97b904a/volumes" Dec 09 17:19:38 crc kubenswrapper[4853]: I1209 17:19:38.236301 4853 generic.go:334] "Generic (PLEG): container finished" podID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerID="ee11d28451444e8e1d1d965743c94c769d26a065aed18448c3f99f995ba951b5" exitCode=0 Dec 09 17:19:38 crc kubenswrapper[4853]: I1209 17:19:38.236351 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" event={"ID":"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7","Type":"ContainerDied","Data":"ee11d28451444e8e1d1d965743c94c769d26a065aed18448c3f99f995ba951b5"} Dec 09 17:19:39 crc kubenswrapper[4853]: I1209 17:19:39.611652 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.293110 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-vqpmd"] Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.295217 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.308868 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.309325 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2xs4c" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.319445 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vqpmd"] Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.418525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222gq\" (UniqueName: \"kubernetes.io/projected/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-kube-api-access-222gq\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.418724 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-combined-ca-bundle\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.418804 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-db-sync-config-data\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.418868 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-config-data\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.526243 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-combined-ca-bundle\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.526508 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-db-sync-config-data\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.526670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-config-data\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.526846 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-222gq\" (UniqueName: \"kubernetes.io/projected/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-kube-api-access-222gq\") pod 
\"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.538980 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-combined-ca-bundle\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.542139 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-config-data\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.546808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-db-sync-config-data\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.554628 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222gq\" (UniqueName: \"kubernetes.io/projected/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-kube-api-access-222gq\") pod \"glance-db-sync-vqpmd\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:40 crc kubenswrapper[4853]: I1209 17:19:40.628518 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vqpmd" Dec 09 17:19:41 crc kubenswrapper[4853]: I1209 17:19:41.267531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerStarted","Data":"02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a"} Dec 09 17:19:42 crc kubenswrapper[4853]: I1209 17:19:42.473866 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 09 17:19:43 crc kubenswrapper[4853]: I1209 17:19:43.789390 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:43 crc kubenswrapper[4853]: E1209 17:19:43.789662 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 17:19:43 crc kubenswrapper[4853]: E1209 17:19:43.790298 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 17:19:43 crc kubenswrapper[4853]: E1209 17:19:43.790413 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift podName:2f6e868f-f4bc-42d3-bbe6-2a391e2b768d nodeName:}" failed. No retries permitted until 2025-12-09 17:19:59.790396454 +0000 UTC m=+1426.725135626 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift") pod "swift-storage-0" (UID: "2f6e868f-f4bc-42d3-bbe6-2a391e2b768d") : configmap "swift-ring-files" not found Dec 09 17:19:44 crc kubenswrapper[4853]: I1209 17:19:44.611517 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.199764 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.207052 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dv9px" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.220553 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.258647 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.265499 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.317379 4853 generic.go:334] "Generic (PLEG): container finished" podID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerID="9d3feb5a12e69400f9270b312552b50129b059d8c865fffbce24185943e545b6" exitCode=0 Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.317450 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96900f2e-a2ad-47fe-be9b-7b6a924ded82","Type":"ContainerDied","Data":"9d3feb5a12e69400f9270b312552b50129b059d8c865fffbce24185943e545b6"} Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.325279 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cff-account-create-update-nzq9s" event={"ID":"8cf06379-f14c-4652-9768-459276512e7f","Type":"ContainerDied","Data":"971958f0c0cdcf301df538e73c5589e8b84038f5c7ee1bd6c5daf7baf82e080d"} Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.325315 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="971958f0c0cdcf301df538e73c5589e8b84038f5c7ee1bd6c5daf7baf82e080d" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.325507 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cff-account-create-update-nzq9s" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.326483 4853 generic.go:334] "Generic (PLEG): container finished" podID="03a2cb4e-7efc-4040-a115-db55575800e5" containerID="21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4" exitCode=0 Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.326542 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03a2cb4e-7efc-4040-a115-db55575800e5","Type":"ContainerDied","Data":"21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4"} Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.333563 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dv9px" event={"ID":"170cafdb-5ab6-47f8-ba66-0398e9ca3904","Type":"ContainerDied","Data":"735ae79fc35fe37b16e39c1f7643de183fdc88d3ce185e9098710bf6c895f101"} Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.333653 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735ae79fc35fe37b16e39c1f7643de183fdc88d3ce185e9098710bf6c895f101" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.333703 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dv9px" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.341302 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-36df-account-create-update-qs6kt" event={"ID":"d21c5479-b7f8-47f1-9503-ae75e212fe56","Type":"ContainerDied","Data":"f171d2f90b43d88c4954cfbb30fb515302841cfaa927676aff7bc61ad89cd9c8"} Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.341368 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f171d2f90b43d88c4954cfbb30fb515302841cfaa927676aff7bc61ad89cd9c8" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.341452 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-36df-account-create-update-qs6kt" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.343675 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtmrv\" (UniqueName: \"kubernetes.io/projected/b9117d61-707d-4f83-9a09-eaf1c26c1b11-kube-api-access-jtmrv\") pod \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.343769 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21c5479-b7f8-47f1-9503-ae75e212fe56-operator-scripts\") pod \"d21c5479-b7f8-47f1-9503-ae75e212fe56\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.343934 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fhmq\" (UniqueName: \"kubernetes.io/projected/d21c5479-b7f8-47f1-9503-ae75e212fe56-kube-api-access-6fhmq\") pod \"d21c5479-b7f8-47f1-9503-ae75e212fe56\" (UID: \"d21c5479-b7f8-47f1-9503-ae75e212fe56\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.343954 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9117d61-707d-4f83-9a09-eaf1c26c1b11-operator-scripts\") pod \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\" (UID: \"b9117d61-707d-4f83-9a09-eaf1c26c1b11\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.343983 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtvgs\" (UniqueName: \"kubernetes.io/projected/170cafdb-5ab6-47f8-ba66-0398e9ca3904-kube-api-access-mtvgs\") pod \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.344001 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170cafdb-5ab6-47f8-ba66-0398e9ca3904-operator-scripts\") pod \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\" (UID: \"170cafdb-5ab6-47f8-ba66-0398e9ca3904\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.344821 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170cafdb-5ab6-47f8-ba66-0398e9ca3904-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "170cafdb-5ab6-47f8-ba66-0398e9ca3904" (UID: "170cafdb-5ab6-47f8-ba66-0398e9ca3904"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.345030 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9117d61-707d-4f83-9a09-eaf1c26c1b11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9117d61-707d-4f83-9a09-eaf1c26c1b11" (UID: "b9117d61-707d-4f83-9a09-eaf1c26c1b11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.345151 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d21c5479-b7f8-47f1-9503-ae75e212fe56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d21c5479-b7f8-47f1-9503-ae75e212fe56" (UID: "d21c5479-b7f8-47f1-9503-ae75e212fe56"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.346228 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.346227 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wz58m" event={"ID":"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7","Type":"ContainerDied","Data":"2456817c268381079eeffbc72ad809c958ce8ae72c2d5e8d510ddec999e83eb9"} Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.346339 4853 scope.go:117] "RemoveContainer" containerID="ee11d28451444e8e1d1d965743c94c769d26a065aed18448c3f99f995ba951b5" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.348242 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9117d61-707d-4f83-9a09-eaf1c26c1b11-kube-api-access-jtmrv" (OuterVolumeSpecName: "kube-api-access-jtmrv") pod "b9117d61-707d-4f83-9a09-eaf1c26c1b11" (UID: "b9117d61-707d-4f83-9a09-eaf1c26c1b11"). InnerVolumeSpecName "kube-api-access-jtmrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.351746 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21c5479-b7f8-47f1-9503-ae75e212fe56-kube-api-access-6fhmq" (OuterVolumeSpecName: "kube-api-access-6fhmq") pod "d21c5479-b7f8-47f1-9503-ae75e212fe56" (UID: "d21c5479-b7f8-47f1-9503-ae75e212fe56"). InnerVolumeSpecName "kube-api-access-6fhmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.352662 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170cafdb-5ab6-47f8-ba66-0398e9ca3904-kube-api-access-mtvgs" (OuterVolumeSpecName: "kube-api-access-mtvgs") pod "170cafdb-5ab6-47f8-ba66-0398e9ca3904" (UID: "170cafdb-5ab6-47f8-ba66-0398e9ca3904"). InnerVolumeSpecName "kube-api-access-mtvgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.355788 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nq7h2" event={"ID":"b9117d61-707d-4f83-9a09-eaf1c26c1b11","Type":"ContainerDied","Data":"97a5470a4e1f3a0602922660439d7b9fb0de6582825802331f20a388e4c6a205"} Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.355824 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a5470a4e1f3a0602922660439d7b9fb0de6582825802331f20a388e4c6a205" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.355918 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nq7h2" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.384401 4853 scope.go:117] "RemoveContainer" containerID="420351ec532ba770033973beec7423451fe4d8ae13d33c06e1d753b6cf88dde8" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.442305 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jrd55"] Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.446240 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vc42\" (UniqueName: \"kubernetes.io/projected/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-kube-api-access-2vc42\") pod \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.446455 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-dns-svc\") pod \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.446564 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmn4j\" (UniqueName: \"kubernetes.io/projected/8cf06379-f14c-4652-9768-459276512e7f-kube-api-access-vmn4j\") pod \"8cf06379-f14c-4652-9768-459276512e7f\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.446647 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-sb\") pod \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.446696 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-nb\") pod \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.446730 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cf06379-f14c-4652-9768-459276512e7f-operator-scripts\") pod \"8cf06379-f14c-4652-9768-459276512e7f\" (UID: \"8cf06379-f14c-4652-9768-459276512e7f\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.446828 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-config\") pod \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\" (UID: \"54cdd066-306e-4bbb-8dbc-7b01cf8f32f7\") " Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.447413 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fhmq\" (UniqueName: \"kubernetes.io/projected/d21c5479-b7f8-47f1-9503-ae75e212fe56-kube-api-access-6fhmq\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.447429 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9117d61-707d-4f83-9a09-eaf1c26c1b11-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.447440 4853 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtvgs\" (UniqueName: \"kubernetes.io/projected/170cafdb-5ab6-47f8-ba66-0398e9ca3904-kube-api-access-mtvgs\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.447449 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170cafdb-5ab6-47f8-ba66-0398e9ca3904-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.447458 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtmrv\" (UniqueName: \"kubernetes.io/projected/b9117d61-707d-4f83-9a09-eaf1c26c1b11-kube-api-access-jtmrv\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.447467 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21c5479-b7f8-47f1-9503-ae75e212fe56-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.451961 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cf06379-f14c-4652-9768-459276512e7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8cf06379-f14c-4652-9768-459276512e7f" (UID: "8cf06379-f14c-4652-9768-459276512e7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.459264 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf06379-f14c-4652-9768-459276512e7f-kube-api-access-vmn4j" (OuterVolumeSpecName: "kube-api-access-vmn4j") pod "8cf06379-f14c-4652-9768-459276512e7f" (UID: "8cf06379-f14c-4652-9768-459276512e7f"). InnerVolumeSpecName "kube-api-access-vmn4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.459793 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-kube-api-access-2vc42" (OuterVolumeSpecName: "kube-api-access-2vc42") pod "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" (UID: "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7"). InnerVolumeSpecName "kube-api-access-2vc42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: W1209 17:19:45.495688 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2641ad_aacc_489a_b452_0751797303e0.slice/crio-f86e932e78654abfd503aac11831f6a9cf83b92063fc4f374c11776e6c6eac06 WatchSource:0}: Error finding container f86e932e78654abfd503aac11831f6a9cf83b92063fc4f374c11776e6c6eac06: Status 404 returned error can't find the container with id f86e932e78654abfd503aac11831f6a9cf83b92063fc4f374c11776e6c6eac06 Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.531284 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vqpmd"] Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.551952 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cf06379-f14c-4652-9768-459276512e7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.551995 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vc42\" (UniqueName: \"kubernetes.io/projected/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-kube-api-access-2vc42\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.552008 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmn4j\" (UniqueName: \"kubernetes.io/projected/8cf06379-f14c-4652-9768-459276512e7f-kube-api-access-vmn4j\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.556249 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" (UID: "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.562162 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-k9tc2" podUID="e9047f51-9852-47e3-bc10-649c8d638054" containerName="ovn-controller" probeResult="failure" output=< Dec 09 17:19:45 crc kubenswrapper[4853]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 17:19:45 crc kubenswrapper[4853]: > Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.565509 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" (UID: "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.567995 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-config" (OuterVolumeSpecName: "config") pod "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" (UID: "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.568058 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" (UID: "54cdd066-306e-4bbb-8dbc-7b01cf8f32f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.616249 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a2c0-account-create-update-l856k"] Dec 09 17:19:45 crc kubenswrapper[4853]: W1209 17:19:45.617791 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fab64e_a230_4cd5_a766_7d5668603181.slice/crio-c370f994af1bf2323a66f1bea626aed810efc55c0a5430fdf60f740cd8f9c1bb WatchSource:0}: Error finding container c370f994af1bf2323a66f1bea626aed810efc55c0a5430fdf60f740cd8f9c1bb: Status 404 returned error can't find the container with id c370f994af1bf2323a66f1bea626aed810efc55c0a5430fdf60f740cd8f9c1bb Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.653967 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.655295 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.655314 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.655323 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.655332 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.698026 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-twpcm" Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.789385 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wz58m"] Dec 09 17:19:45 crc kubenswrapper[4853]: I1209 17:19:45.836925 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wz58m"] Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.104498 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k9tc2-config-5gxt6"] Dec 09 17:19:46 crc kubenswrapper[4853]: E1209 17:19:46.108198 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf06379-f14c-4652-9768-459276512e7f" containerName="mariadb-account-create-update" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.108385 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf06379-f14c-4652-9768-459276512e7f" containerName="mariadb-account-create-update" Dec 09 
17:19:46 crc kubenswrapper[4853]: E1209 17:19:46.108488 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9117d61-707d-4f83-9a09-eaf1c26c1b11" containerName="mariadb-database-create" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.108546 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9117d61-707d-4f83-9a09-eaf1c26c1b11" containerName="mariadb-database-create" Dec 09 17:19:46 crc kubenswrapper[4853]: E1209 17:19:46.108636 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170cafdb-5ab6-47f8-ba66-0398e9ca3904" containerName="mariadb-database-create" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.108702 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="170cafdb-5ab6-47f8-ba66-0398e9ca3904" containerName="mariadb-database-create" Dec 09 17:19:46 crc kubenswrapper[4853]: E1209 17:19:46.108767 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="init" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.108824 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="init" Dec 09 17:19:46 crc kubenswrapper[4853]: E1209 17:19:46.108894 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21c5479-b7f8-47f1-9503-ae75e212fe56" containerName="mariadb-account-create-update" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.108984 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21c5479-b7f8-47f1-9503-ae75e212fe56" containerName="mariadb-account-create-update" Dec 09 17:19:46 crc kubenswrapper[4853]: E1209 17:19:46.109045 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="dnsmasq-dns" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.109139 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="dnsmasq-dns" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.109426 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21c5479-b7f8-47f1-9503-ae75e212fe56" containerName="mariadb-account-create-update" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.109493 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf06379-f14c-4652-9768-459276512e7f" containerName="mariadb-account-create-update" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.109575 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9117d61-707d-4f83-9a09-eaf1c26c1b11" containerName="mariadb-database-create" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.109653 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="170cafdb-5ab6-47f8-ba66-0398e9ca3904" containerName="mariadb-database-create" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.109731 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" containerName="dnsmasq-dns" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.110537 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.120354 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.136770 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9tc2-config-5gxt6"] Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.270237 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8w7\" (UniqueName: \"kubernetes.io/projected/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-kube-api-access-6k8w7\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.270396 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run-ovn\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.270433 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.270502 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-additional-scripts\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.270614 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-scripts\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.270699 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-log-ovn\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.368818 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96900f2e-a2ad-47fe-be9b-7b6a924ded82","Type":"ContainerStarted","Data":"ac0fd5259f9efa3d8d6a09fb258b3fcb49f0c6f25ce4ec2dbb972cd65109ec37"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.370207 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.372235 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-scripts\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.372326 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-log-ovn\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.372375 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8w7\" (UniqueName: \"kubernetes.io/projected/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-kube-api-access-6k8w7\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.372450 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run-ovn\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.372476 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.372520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-additional-scripts\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.373703 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-additional-scripts\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.373702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2tq9" event={"ID":"93e3401b-eae8-4c50-a73b-686525de14a2","Type":"ContainerStarted","Data":"4a063e68ccfefebd54e0fa57c638e20fdbf5f2e4f2d3f4f1bc4f927a1a546ab8"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.373925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run-ovn\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.373966 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run\") pod 
\"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.374367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-log-ovn\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.374834 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-scripts\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.376303 4853 generic.go:334] "Generic (PLEG): container finished" podID="ab2641ad-aacc-489a-b452-0751797303e0" containerID="9b28d6a2316fdfede38c842c97fcecd17b12a72d983339b2e01c0b08d05193c8" exitCode=0 Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.376356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" event={"ID":"ab2641ad-aacc-489a-b452-0751797303e0","Type":"ContainerDied","Data":"9b28d6a2316fdfede38c842c97fcecd17b12a72d983339b2e01c0b08d05193c8"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.376380 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" event={"ID":"ab2641ad-aacc-489a-b452-0751797303e0","Type":"ContainerStarted","Data":"f86e932e78654abfd503aac11831f6a9cf83b92063fc4f374c11776e6c6eac06"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.377655 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vqpmd" event={"ID":"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f","Type":"ContainerStarted","Data":"2279c91d7f7abb56d0f4a78823de30913d90437d3178ef79b26a542893f84265"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.380057 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03a2cb4e-7efc-4040-a115-db55575800e5","Type":"ContainerStarted","Data":"8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.380327 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.381748 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" event={"ID":"38fab64e-a230-4cd5-a766-7d5668603181","Type":"ContainerStarted","Data":"fb54a58d8e849681e3a0d3b9ee770ea0780a9ef0ed267397635af7ffa856a5bd"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.381781 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" event={"ID":"38fab64e-a230-4cd5-a766-7d5668603181","Type":"ContainerStarted","Data":"c370f994af1bf2323a66f1bea626aed810efc55c0a5430fdf60f740cd8f9c1bb"} Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.395295 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.850290606 podStartE2EDuration="1m6.39526296s" podCreationTimestamp="2025-12-09 
17:18:40 +0000 UTC" firstStartedPulling="2025-12-09 17:18:56.851046874 +0000 UTC m=+1363.785786056" lastFinishedPulling="2025-12-09 17:19:09.396019208 +0000 UTC m=+1376.330758410" observedRunningTime="2025-12-09 17:19:46.389018214 +0000 UTC m=+1413.323757416" watchObservedRunningTime="2025-12-09 17:19:46.39526296 +0000 UTC m=+1413.330002162" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.411354 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8w7\" (UniqueName: \"kubernetes.io/projected/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-kube-api-access-6k8w7\") pod \"ovn-controller-k9tc2-config-5gxt6\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.428219 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.445055 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v2tq9" podStartSLOduration=6.221846426 podStartE2EDuration="15.445036912s" podCreationTimestamp="2025-12-09 17:19:31 +0000 UTC" firstStartedPulling="2025-12-09 17:19:35.689016531 +0000 UTC m=+1402.623755713" lastFinishedPulling="2025-12-09 17:19:44.912207017 +0000 UTC m=+1411.846946199" observedRunningTime="2025-12-09 17:19:46.44496803 +0000 UTC m=+1413.379707212" watchObservedRunningTime="2025-12-09 17:19:46.445036912 +0000 UTC m=+1413.379776094" Dec 09 17:19:46 crc kubenswrapper[4853]: I1209 17:19:46.468244 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.916124008 podStartE2EDuration="1m6.468221141s" podCreationTimestamp="2025-12-09 17:18:40 +0000 UTC" firstStartedPulling="2025-12-09 17:18:56.401046121 +0000 UTC m=+1363.335785303" lastFinishedPulling="2025-12-09 17:19:08.953143254 +0000 UTC m=+1375.887882436" observedRunningTime="2025-12-09 17:19:46.464573599 +0000 UTC m=+1413.399312791" watchObservedRunningTime="2025-12-09 17:19:46.468221141 +0000 UTC m=+1413.402960323" Dec 09 17:19:47 crc kubenswrapper[4853]: I1209 17:19:47.395367 4853 generic.go:334] "Generic (PLEG): container finished" podID="38fab64e-a230-4cd5-a766-7d5668603181" containerID="fb54a58d8e849681e3a0d3b9ee770ea0780a9ef0ed267397635af7ffa856a5bd" exitCode=0 Dec 09 17:19:47 crc kubenswrapper[4853]: I1209 17:19:47.395486 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" event={"ID":"38fab64e-a230-4cd5-a766-7d5668603181","Type":"ContainerDied","Data":"fb54a58d8e849681e3a0d3b9ee770ea0780a9ef0ed267397635af7ffa856a5bd"} Dec 09 17:19:47 crc kubenswrapper[4853]: I1209 17:19:47.598849 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54cdd066-306e-4bbb-8dbc-7b01cf8f32f7" path="/var/lib/kubelet/pods/54cdd066-306e-4bbb-8dbc-7b01cf8f32f7/volumes" Dec 09 17:19:48 crc kubenswrapper[4853]: I1209 17:19:48.417428 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k9tc2-config-5gxt6"] Dec 09 17:19:49 crc kubenswrapper[4853]: W1209 17:19:49.091517 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d868a5_7e55_4896_b56d_4c57d9b3cb8f.slice/crio-1de236ba09c6c259e20897d477142b0292ea89726fd21cdf74d31f953ac90871 WatchSource:0}: Error finding container 
1de236ba09c6c259e20897d477142b0292ea89726fd21cdf74d31f953ac90871: Status 404 returned error can't find the container with id 1de236ba09c6c259e20897d477142b0292ea89726fd21cdf74d31f953ac90871 Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.220149 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.223510 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.251014 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fab64e-a230-4cd5-a766-7d5668603181-operator-scripts\") pod \"38fab64e-a230-4cd5-a766-7d5668603181\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.251117 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/38fab64e-a230-4cd5-a766-7d5668603181-kube-api-access-nfrvh\") pod \"38fab64e-a230-4cd5-a766-7d5668603181\" (UID: \"38fab64e-a230-4cd5-a766-7d5668603181\") " Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.251187 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wfxp\" (UniqueName: \"kubernetes.io/projected/ab2641ad-aacc-489a-b452-0751797303e0-kube-api-access-5wfxp\") pod \"ab2641ad-aacc-489a-b452-0751797303e0\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.251240 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2641ad-aacc-489a-b452-0751797303e0-operator-scripts\") pod \"ab2641ad-aacc-489a-b452-0751797303e0\" (UID: \"ab2641ad-aacc-489a-b452-0751797303e0\") " Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.252459 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2641ad-aacc-489a-b452-0751797303e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab2641ad-aacc-489a-b452-0751797303e0" (UID: "ab2641ad-aacc-489a-b452-0751797303e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.255555 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fab64e-a230-4cd5-a766-7d5668603181-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38fab64e-a230-4cd5-a766-7d5668603181" (UID: "38fab64e-a230-4cd5-a766-7d5668603181"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.271911 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2641ad-aacc-489a-b452-0751797303e0-kube-api-access-5wfxp" (OuterVolumeSpecName: "kube-api-access-5wfxp") pod "ab2641ad-aacc-489a-b452-0751797303e0" (UID: "ab2641ad-aacc-489a-b452-0751797303e0"). InnerVolumeSpecName "kube-api-access-5wfxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.274049 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fab64e-a230-4cd5-a766-7d5668603181-kube-api-access-nfrvh" (OuterVolumeSpecName: "kube-api-access-nfrvh") pod "38fab64e-a230-4cd5-a766-7d5668603181" (UID: "38fab64e-a230-4cd5-a766-7d5668603181"). InnerVolumeSpecName "kube-api-access-nfrvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.353867 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2641ad-aacc-489a-b452-0751797303e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.354180 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fab64e-a230-4cd5-a766-7d5668603181-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.354190 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfrvh\" (UniqueName: \"kubernetes.io/projected/38fab64e-a230-4cd5-a766-7d5668603181-kube-api-access-nfrvh\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.354203 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wfxp\" (UniqueName: \"kubernetes.io/projected/ab2641ad-aacc-489a-b452-0751797303e0-kube-api-access-5wfxp\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.432778 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9tc2-config-5gxt6" event={"ID":"97d868a5-7e55-4896-b56d-4c57d9b3cb8f","Type":"ContainerStarted","Data":"1de236ba09c6c259e20897d477142b0292ea89726fd21cdf74d31f953ac90871"} Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.462243 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" event={"ID":"ab2641ad-aacc-489a-b452-0751797303e0","Type":"ContainerDied","Data":"f86e932e78654abfd503aac11831f6a9cf83b92063fc4f374c11776e6c6eac06"} Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.462282 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86e932e78654abfd503aac11831f6a9cf83b92063fc4f374c11776e6c6eac06" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.462341 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-jrd55" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.497379 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" event={"ID":"38fab64e-a230-4cd5-a766-7d5668603181","Type":"ContainerDied","Data":"c370f994af1bf2323a66f1bea626aed810efc55c0a5430fdf60f740cd8f9c1bb"} Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.497428 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c370f994af1bf2323a66f1bea626aed810efc55c0a5430fdf60f740cd8f9c1bb" Dec 09 17:19:49 crc kubenswrapper[4853]: I1209 17:19:49.497501 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a2c0-account-create-update-l856k" Dec 09 17:19:50 crc kubenswrapper[4853]: I1209 17:19:50.501286 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-k9tc2" Dec 09 17:19:50 crc kubenswrapper[4853]: I1209 17:19:50.509956 4853 generic.go:334] "Generic (PLEG): container finished" podID="97d868a5-7e55-4896-b56d-4c57d9b3cb8f" containerID="3aa945de9f40ff01bea42457b7153a08d0589b5c3a487bd27c8124f884a6b089" exitCode=0 Dec 09 17:19:50 crc kubenswrapper[4853]: I1209 17:19:50.510250 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9tc2-config-5gxt6" event={"ID":"97d868a5-7e55-4896-b56d-4c57d9b3cb8f","Type":"ContainerDied","Data":"3aa945de9f40ff01bea42457b7153a08d0589b5c3a487bd27c8124f884a6b089"} Dec 09 17:19:50 crc kubenswrapper[4853]: I1209 17:19:50.514839 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerStarted","Data":"5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531"} Dec 09 17:19:50 crc kubenswrapper[4853]: I1209 17:19:50.555001 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=12.811198509 podStartE2EDuration="1m4.554980347s" podCreationTimestamp="2025-12-09 17:18:46 +0000 UTC" firstStartedPulling="2025-12-09 17:18:57.448863384 +0000 UTC m=+1364.383602566" lastFinishedPulling="2025-12-09 17:19:49.192645222 +0000 UTC m=+1416.127384404" observedRunningTime="2025-12-09 17:19:50.549948916 +0000 UTC m=+1417.484688108" watchObservedRunningTime="2025-12-09 17:19:50.554980347 +0000 UTC m=+1417.489719519" Dec 09 17:19:51 crc kubenswrapper[4853]: I1209 17:19:51.954753 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.020154 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run-ovn\") pod \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.020277 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "97d868a5-7e55-4896-b56d-4c57d9b3cb8f" (UID: "97d868a5-7e55-4896-b56d-4c57d9b3cb8f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.020626 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run\") pod \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.020663 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run" (OuterVolumeSpecName: "var-run") pod "97d868a5-7e55-4896-b56d-4c57d9b3cb8f" (UID: "97d868a5-7e55-4896-b56d-4c57d9b3cb8f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.020897 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "97d868a5-7e55-4896-b56d-4c57d9b3cb8f" (UID: "97d868a5-7e55-4896-b56d-4c57d9b3cb8f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.020946 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-log-ovn\") pod \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.021048 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-additional-scripts\") pod \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.021973 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "97d868a5-7e55-4896-b56d-4c57d9b3cb8f" (UID: "97d868a5-7e55-4896-b56d-4c57d9b3cb8f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.023138 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-scripts" (OuterVolumeSpecName: "scripts") pod "97d868a5-7e55-4896-b56d-4c57d9b3cb8f" (UID: "97d868a5-7e55-4896-b56d-4c57d9b3cb8f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.023212 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-scripts\") pod \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.023260 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8w7\" (UniqueName: \"kubernetes.io/projected/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-kube-api-access-6k8w7\") pod \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\" (UID: \"97d868a5-7e55-4896-b56d-4c57d9b3cb8f\") " Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.024742 4853 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.024775 4853 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.024789 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.024801 4853 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.024812 4853 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-var-run\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.029429 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-kube-api-access-6k8w7" (OuterVolumeSpecName: "kube-api-access-6k8w7") pod "97d868a5-7e55-4896-b56d-4c57d9b3cb8f" (UID: "97d868a5-7e55-4896-b56d-4c57d9b3cb8f"). InnerVolumeSpecName "kube-api-access-6k8w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.128272 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8w7\" (UniqueName: \"kubernetes.io/projected/97d868a5-7e55-4896-b56d-4c57d9b3cb8f-kube-api-access-6k8w7\") on node \"crc\" DevicePath \"\"" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.219165 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:19:52 crc kubenswrapper[4853]: E1209 17:19:52.219641 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fab64e-a230-4cd5-a766-7d5668603181" containerName="mariadb-account-create-update" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.219687 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fab64e-a230-4cd5-a766-7d5668603181" containerName="mariadb-account-create-update" Dec 09 17:19:52 crc kubenswrapper[4853]: E1209 17:19:52.219707 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d868a5-7e55-4896-b56d-4c57d9b3cb8f" containerName="ovn-config" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.219714 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d868a5-7e55-4896-b56d-4c57d9b3cb8f" containerName="ovn-config" Dec 09 17:19:52 crc kubenswrapper[4853]: E1209 17:19:52.219732 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2641ad-aacc-489a-b452-0751797303e0" containerName="mariadb-database-create" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.219743 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2641ad-aacc-489a-b452-0751797303e0" containerName="mariadb-database-create" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.219982 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d868a5-7e55-4896-b56d-4c57d9b3cb8f" containerName="ovn-config" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.220004 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2641ad-aacc-489a-b452-0751797303e0" containerName="mariadb-database-create" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.220027 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fab64e-a230-4cd5-a766-7d5668603181" containerName="mariadb-account-create-update" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.220982 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.232551 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.251037 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.340836 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-config-data\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.341059 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59dq\" (UniqueName: \"kubernetes.io/projected/62a55e7b-220c-44a9-acdd-ad06588c155e-kube-api-access-t59dq\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.341246 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.443504 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-config-data\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.443657 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t59dq\" (UniqueName: \"kubernetes.io/projected/62a55e7b-220c-44a9-acdd-ad06588c155e-kube-api-access-t59dq\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.443721 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.460572 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.478226 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-config-data\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.498433 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t59dq\" (UniqueName: 
\"kubernetes.io/projected/62a55e7b-220c-44a9-acdd-ad06588c155e-kube-api-access-t59dq\") pod \"mysqld-exporter-0\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.548323 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.562314 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k9tc2-config-5gxt6" event={"ID":"97d868a5-7e55-4896-b56d-4c57d9b3cb8f","Type":"ContainerDied","Data":"1de236ba09c6c259e20897d477142b0292ea89726fd21cdf74d31f953ac90871"} Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.562374 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de236ba09c6c259e20897d477142b0292ea89726fd21cdf74d31f953ac90871" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.562460 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k9tc2-config-5gxt6" Dec 09 17:19:52 crc kubenswrapper[4853]: I1209 17:19:52.854873 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 17:19:53 crc kubenswrapper[4853]: I1209 17:19:53.051424 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:19:53 crc kubenswrapper[4853]: W1209 17:19:53.058493 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62a55e7b_220c_44a9_acdd_ad06588c155e.slice/crio-76886e3a6dcf2128f962e37bf5cadf5c2d2de317a80a3295f203ed22ea9ace1f WatchSource:0}: Error finding container 76886e3a6dcf2128f962e37bf5cadf5c2d2de317a80a3295f203ed22ea9ace1f: Status 404 returned error can't find the container with id 76886e3a6dcf2128f962e37bf5cadf5c2d2de317a80a3295f203ed22ea9ace1f Dec 09 17:19:53 crc kubenswrapper[4853]: I1209 17:19:53.076495 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-k9tc2-config-5gxt6"] Dec 09 17:19:53 crc kubenswrapper[4853]: I1209 17:19:53.091894 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-k9tc2-config-5gxt6"] Dec 09 17:19:53 crc kubenswrapper[4853]: I1209 17:19:53.574405 4853 generic.go:334] "Generic (PLEG): container finished" podID="93e3401b-eae8-4c50-a73b-686525de14a2" containerID="4a063e68ccfefebd54e0fa57c638e20fdbf5f2e4f2d3f4f1bc4f927a1a546ab8" exitCode=0 Dec 09 17:19:53 crc kubenswrapper[4853]: I1209 17:19:53.580233 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d868a5-7e55-4896-b56d-4c57d9b3cb8f" path="/var/lib/kubelet/pods/97d868a5-7e55-4896-b56d-4c57d9b3cb8f/volumes" Dec 09 17:19:53 crc kubenswrapper[4853]: I1209 17:19:53.581098 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2tq9" event={"ID":"93e3401b-eae8-4c50-a73b-686525de14a2","Type":"ContainerDied","Data":"4a063e68ccfefebd54e0fa57c638e20fdbf5f2e4f2d3f4f1bc4f927a1a546ab8"} Dec 09 17:19:53 crc kubenswrapper[4853]: I1209 17:19:53.581149 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"62a55e7b-220c-44a9-acdd-ad06588c155e","Type":"ContainerStarted","Data":"76886e3a6dcf2128f962e37bf5cadf5c2d2de317a80a3295f203ed22ea9ace1f"} Dec 09 17:19:58 crc kubenswrapper[4853]: I1209 17:19:58.594888 4853 patch_prober.go:28] interesting 
pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:19:58 crc kubenswrapper[4853]: I1209 17:19:58.595734 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:19:59 crc kubenswrapper[4853]: I1209 17:19:59.806244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:59 crc kubenswrapper[4853]: I1209 17:19:59.813456 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f6e868f-f4bc-42d3-bbe6-2a391e2b768d-etc-swift\") pod \"swift-storage-0\" (UID: \"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d\") " pod="openstack/swift-storage-0" Dec 09 17:19:59 crc kubenswrapper[4853]: I1209 17:19:59.924007 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 09 17:20:01 crc kubenswrapper[4853]: I1209 17:20:01.507901 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 17:20:01 crc kubenswrapper[4853]: I1209 17:20:01.824860 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.034400 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wpnd5"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.036409 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.059478 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wpnd5"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.123405 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rhs9j"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.124969 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.148698 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4186-account-create-update-cdtpf"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.150191 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.154555 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.174561 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rhs9j"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.208201 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4186-account-create-update-cdtpf"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.208416 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e5f07-60db-4bae-9f04-8c5915067796-operator-scripts\") pod \"barbican-db-create-rhs9j\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.208491 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nj98\" (UniqueName: \"kubernetes.io/projected/99cf6f02-4548-4822-8cea-219f8f35db7d-kube-api-access-7nj98\") pod \"cinder-db-create-wpnd5\" (UID: \"99cf6f02-4548-4822-8cea-219f8f35db7d\") " pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.208533 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99cf6f02-4548-4822-8cea-219f8f35db7d-operator-scripts\") pod \"cinder-db-create-wpnd5\" (UID: \"99cf6f02-4548-4822-8cea-219f8f35db7d\") " pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.208562 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54dw\" (UniqueName: \"kubernetes.io/projected/4c6e5f07-60db-4bae-9f04-8c5915067796-kube-api-access-p54dw\") pod \"barbican-db-create-rhs9j\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.230415 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-573f-account-create-update-gq8pp"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.232049 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.240921 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.246928 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-573f-account-create-update-gq8pp"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.311173 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e5f07-60db-4bae-9f04-8c5915067796-operator-scripts\") pod \"barbican-db-create-rhs9j\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.311228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nj98\" (UniqueName: \"kubernetes.io/projected/99cf6f02-4548-4822-8cea-219f8f35db7d-kube-api-access-7nj98\") pod \"cinder-db-create-wpnd5\" (UID: \"99cf6f02-4548-4822-8cea-219f8f35db7d\") " pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.311254 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99cf6f02-4548-4822-8cea-219f8f35db7d-operator-scripts\") pod \"cinder-db-create-wpnd5\" (UID: \"99cf6f02-4548-4822-8cea-219f8f35db7d\") " pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.311274 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p54dw\" (UniqueName: \"kubernetes.io/projected/4c6e5f07-60db-4bae-9f04-8c5915067796-kube-api-access-p54dw\") pod \"barbican-db-create-rhs9j\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.311350 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knqc8\" (UniqueName: \"kubernetes.io/projected/9d2e1f43-e047-4825-9457-a3a9bcfba205-kube-api-access-knqc8\") pod \"barbican-4186-account-create-update-cdtpf\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.311384 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2e1f43-e047-4825-9457-a3a9bcfba205-operator-scripts\") pod \"barbican-4186-account-create-update-cdtpf\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.312136 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e5f07-60db-4bae-9f04-8c5915067796-operator-scripts\") pod \"barbican-db-create-rhs9j\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.312881 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99cf6f02-4548-4822-8cea-219f8f35db7d-operator-scripts\") pod \"cinder-db-create-wpnd5\" (UID: 
\"99cf6f02-4548-4822-8cea-219f8f35db7d\") " pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.330114 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-cxxpb"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.331895 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.356076 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nj98\" (UniqueName: \"kubernetes.io/projected/99cf6f02-4548-4822-8cea-219f8f35db7d-kube-api-access-7nj98\") pod \"cinder-db-create-wpnd5\" (UID: \"99cf6f02-4548-4822-8cea-219f8f35db7d\") " pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.358693 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tvtzt"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.360523 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.365397 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.365490 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.365702 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.365769 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrrnz" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.369812 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.372314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p54dw\" (UniqueName: \"kubernetes.io/projected/4c6e5f07-60db-4bae-9f04-8c5915067796-kube-api-access-p54dw\") pod \"barbican-db-create-rhs9j\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.391489 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-cxxpb"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.413828 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dfl\" (UniqueName: \"kubernetes.io/projected/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-kube-api-access-95dfl\") pod \"heat-db-create-cxxpb\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.413906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5h4\" (UniqueName: \"kubernetes.io/projected/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-kube-api-access-qd5h4\") pod \"cinder-573f-account-create-update-gq8pp\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.414040 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knqc8\" (UniqueName: \"kubernetes.io/projected/9d2e1f43-e047-4825-9457-a3a9bcfba205-kube-api-access-knqc8\") pod \"barbican-4186-account-create-update-cdtpf\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.414081 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2e1f43-e047-4825-9457-a3a9bcfba205-operator-scripts\") pod \"barbican-4186-account-create-update-cdtpf\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.414114 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-operator-scripts\") pod \"cinder-573f-account-create-update-gq8pp\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.414181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-operator-scripts\") pod \"heat-db-create-cxxpb\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.433365 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tvtzt"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.433387 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2e1f43-e047-4825-9457-a3a9bcfba205-operator-scripts\") pod 
\"barbican-4186-account-create-update-cdtpf\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.447733 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.466925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knqc8\" (UniqueName: \"kubernetes.io/projected/9d2e1f43-e047-4825-9457-a3a9bcfba205-kube-api-access-knqc8\") pod \"barbican-4186-account-create-update-cdtpf\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.472467 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.517580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-combined-ca-bundle\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.517714 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-operator-scripts\") pod \"cinder-573f-account-create-update-gq8pp\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.517796 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-operator-scripts\") pod \"heat-db-create-cxxpb\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.517906 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95dfl\" (UniqueName: \"kubernetes.io/projected/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-kube-api-access-95dfl\") pod \"heat-db-create-cxxpb\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.517939 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhhh\" (UniqueName: \"kubernetes.io/projected/704f2e28-f375-4a95-a680-87e1bcb93058-kube-api-access-bnhhh\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.517981 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5h4\" (UniqueName: \"kubernetes.io/projected/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-kube-api-access-qd5h4\") pod \"cinder-573f-account-create-update-gq8pp\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.518051 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-config-data\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.518612 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-operator-scripts\") pod \"cinder-573f-account-create-update-gq8pp\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.519051 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-operator-scripts\") pod \"heat-db-create-cxxpb\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.555681 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9dphq"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.557011 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.578816 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95dfl\" (UniqueName: \"kubernetes.io/projected/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-kube-api-access-95dfl\") pod \"heat-db-create-cxxpb\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.582218 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5h4\" (UniqueName: \"kubernetes.io/projected/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-kube-api-access-qd5h4\") pod \"cinder-573f-account-create-update-gq8pp\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.610229 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-6be2-account-create-update-4g2xf"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.611573 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.613939 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.622391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnhhh\" (UniqueName: \"kubernetes.io/projected/704f2e28-f375-4a95-a680-87e1bcb93058-kube-api-access-bnhhh\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.622475 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-config-data\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.622516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-combined-ca-bundle\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.626376 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6be2-account-create-update-4g2xf"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.630965 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-config-data\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.643284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-combined-ca-bundle\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.652994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnhhh\" (UniqueName: \"kubernetes.io/projected/704f2e28-f375-4a95-a680-87e1bcb93058-kube-api-access-bnhhh\") pod \"keystone-db-sync-tvtzt\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.653991 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9dphq"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.727706 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/309dcb32-e680-454f-a815-05e689a3f35e-operator-scripts\") pod \"heat-6be2-account-create-update-4g2xf\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.727815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52rrv\" (UniqueName: \"kubernetes.io/projected/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-kube-api-access-52rrv\") pod 
\"neutron-db-create-9dphq\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.727900 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-operator-scripts\") pod \"neutron-db-create-9dphq\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.728002 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td2fd\" (UniqueName: \"kubernetes.io/projected/309dcb32-e680-454f-a815-05e689a3f35e-kube-api-access-td2fd\") pod \"heat-6be2-account-create-update-4g2xf\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.814564 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.828062 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e400-account-create-update-2b57n"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.829445 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52rrv\" (UniqueName: \"kubernetes.io/projected/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-kube-api-access-52rrv\") pod \"neutron-db-create-9dphq\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.829617 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.829619 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-operator-scripts\") pod \"neutron-db-create-9dphq\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.830121 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td2fd\" (UniqueName: \"kubernetes.io/projected/309dcb32-e680-454f-a815-05e689a3f35e-kube-api-access-td2fd\") pod \"heat-6be2-account-create-update-4g2xf\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.830942 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/309dcb32-e680-454f-a815-05e689a3f35e-operator-scripts\") pod \"heat-6be2-account-create-update-4g2xf\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.830798 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.830363 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-operator-scripts\") pod \"neutron-db-create-9dphq\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.831756 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/309dcb32-e680-454f-a815-05e689a3f35e-operator-scripts\") pod \"heat-6be2-account-create-update-4g2xf\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.839682 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e400-account-create-update-2b57n"] Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.842622 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.848473 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td2fd\" (UniqueName: \"kubernetes.io/projected/309dcb32-e680-454f-a815-05e689a3f35e-kube-api-access-td2fd\") pod \"heat-6be2-account-create-update-4g2xf\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.851837 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52rrv\" (UniqueName: \"kubernetes.io/projected/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-kube-api-access-52rrv\") pod \"neutron-db-create-9dphq\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.858972 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.860232 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.863193 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.865804 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.935538 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4818586d-6c0a-4b51-acf3-51605cd25d5f-operator-scripts\") pod \"neutron-e400-account-create-update-2b57n\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.940204 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czn2g\" (UniqueName: \"kubernetes.io/projected/4818586d-6c0a-4b51-acf3-51605cd25d5f-kube-api-access-czn2g\") pod \"neutron-e400-account-create-update-2b57n\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:02 crc kubenswrapper[4853]: I1209 17:20:02.990795 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:03 crc kubenswrapper[4853]: I1209 17:20:03.016823 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:03 crc kubenswrapper[4853]: I1209 17:20:03.053300 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czn2g\" (UniqueName: \"kubernetes.io/projected/4818586d-6c0a-4b51-acf3-51605cd25d5f-kube-api-access-czn2g\") pod \"neutron-e400-account-create-update-2b57n\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:03 crc kubenswrapper[4853]: I1209 17:20:03.053855 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4818586d-6c0a-4b51-acf3-51605cd25d5f-operator-scripts\") pod \"neutron-e400-account-create-update-2b57n\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:03 crc kubenswrapper[4853]: I1209 17:20:03.070410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4818586d-6c0a-4b51-acf3-51605cd25d5f-operator-scripts\") pod \"neutron-e400-account-create-update-2b57n\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:03 crc kubenswrapper[4853]: I1209 17:20:03.085128 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czn2g\" (UniqueName: \"kubernetes.io/projected/4818586d-6c0a-4b51-acf3-51605cd25d5f-kube-api-access-czn2g\") pod \"neutron-e400-account-create-update-2b57n\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:03 crc kubenswrapper[4853]: I1209 17:20:03.257502 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:03 crc kubenswrapper[4853]: I1209 17:20:03.928860 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.079239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-swiftconf\") pod \"93e3401b-eae8-4c50-a73b-686525de14a2\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.079338 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/93e3401b-eae8-4c50-a73b-686525de14a2-kube-api-access-6gqdq\") pod \"93e3401b-eae8-4c50-a73b-686525de14a2\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.079419 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-scripts\") pod \"93e3401b-eae8-4c50-a73b-686525de14a2\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.079491 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93e3401b-eae8-4c50-a73b-686525de14a2-etc-swift\") pod \"93e3401b-eae8-4c50-a73b-686525de14a2\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.079570 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-combined-ca-bundle\") pod \"93e3401b-eae8-4c50-a73b-686525de14a2\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.079704 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-ring-data-devices\") pod \"93e3401b-eae8-4c50-a73b-686525de14a2\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.079738 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-dispersionconf\") pod \"93e3401b-eae8-4c50-a73b-686525de14a2\" (UID: \"93e3401b-eae8-4c50-a73b-686525de14a2\") " Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.084737 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "93e3401b-eae8-4c50-a73b-686525de14a2" (UID: "93e3401b-eae8-4c50-a73b-686525de14a2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.085443 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e3401b-eae8-4c50-a73b-686525de14a2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "93e3401b-eae8-4c50-a73b-686525de14a2" (UID: "93e3401b-eae8-4c50-a73b-686525de14a2"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.088830 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e3401b-eae8-4c50-a73b-686525de14a2-kube-api-access-6gqdq" (OuterVolumeSpecName: "kube-api-access-6gqdq") pod "93e3401b-eae8-4c50-a73b-686525de14a2" (UID: "93e3401b-eae8-4c50-a73b-686525de14a2"). InnerVolumeSpecName "kube-api-access-6gqdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.113784 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "93e3401b-eae8-4c50-a73b-686525de14a2" (UID: "93e3401b-eae8-4c50-a73b-686525de14a2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.116727 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "93e3401b-eae8-4c50-a73b-686525de14a2" (UID: "93e3401b-eae8-4c50-a73b-686525de14a2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.181995 4853 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/93e3401b-eae8-4c50-a73b-686525de14a2-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.182028 4853 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.182039 4853 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.182047 4853 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.182058 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/93e3401b-eae8-4c50-a73b-686525de14a2-kube-api-access-6gqdq\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.208099 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-scripts" (OuterVolumeSpecName: "scripts") pod "93e3401b-eae8-4c50-a73b-686525de14a2" (UID: "93e3401b-eae8-4c50-a73b-686525de14a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.231740 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93e3401b-eae8-4c50-a73b-686525de14a2" (UID: "93e3401b-eae8-4c50-a73b-686525de14a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.284540 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93e3401b-eae8-4c50-a73b-686525de14a2-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.284853 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e3401b-eae8-4c50-a73b-686525de14a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.726714 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v2tq9" Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.726797 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v2tq9" event={"ID":"93e3401b-eae8-4c50-a73b-686525de14a2","Type":"ContainerDied","Data":"3b4af3c1d8d8262677a112ae7bb7a898f43a2ae991cdbc0d29b8d205c536cdcb"} Dec 09 17:20:04 crc kubenswrapper[4853]: I1209 17:20:04.727230 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b4af3c1d8d8262677a112ae7bb7a898f43a2ae991cdbc0d29b8d205c536cdcb" Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.134903 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rhs9j"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.695674 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9dphq"] Dec 09 17:20:05 crc kubenswrapper[4853]: W1209 17:20:05.696246 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99cf6f02_4548_4822_8cea_219f8f35db7d.slice/crio-983c43d4150648db102fc8d7e7fd064d805cb20b005864d98fff8a20d6eea4b7 WatchSource:0}: Error finding container 983c43d4150648db102fc8d7e7fd064d805cb20b005864d98fff8a20d6eea4b7: Status 404 returned error can't find the container with id 983c43d4150648db102fc8d7e7fd064d805cb20b005864d98fff8a20d6eea4b7 Dec 09 17:20:05 crc kubenswrapper[4853]: W1209 17:20:05.701161 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd853bf1e_8a4b_4ac7_8165_c0fcc3d7357d.slice/crio-7d6df7231dd59f4aba1a5301f537cfdecc55208c63a5ba75dd607794f50b9dfe WatchSource:0}: Error finding container 7d6df7231dd59f4aba1a5301f537cfdecc55208c63a5ba75dd607794f50b9dfe: Status 404 returned error can't find the container with id 7d6df7231dd59f4aba1a5301f537cfdecc55208c63a5ba75dd607794f50b9dfe Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.724153 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4186-account-create-update-cdtpf"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.743882 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-cxxpb"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.750376 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wpnd5"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.776762 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tvtzt" event={"ID":"704f2e28-f375-4a95-a680-87e1bcb93058","Type":"ContainerStarted","Data":"11c1d762f85474d4c0c44f1a889afae65a382b6d628ef8be5540b3477ff1dd0e"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 
17:20:05.782897 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rhs9j" event={"ID":"4c6e5f07-60db-4bae-9f04-8c5915067796","Type":"ContainerStarted","Data":"49b24ea6857cf3c0600bfedea3e664b0522da9b5fb8aa28138211d1d2841d11f"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.782928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rhs9j" event={"ID":"4c6e5f07-60db-4bae-9f04-8c5915067796","Type":"ContainerStarted","Data":"86bdbb679289578e7985e2295e6706460fce2a82700722546e30bd4c77f8a83f"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.794312 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6be2-account-create-update-4g2xf"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.818862 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tvtzt"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.825653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vqpmd" event={"ID":"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f","Type":"ContainerStarted","Data":"5c4d4c5782818a71fc3061056fbff6a0d54f8c8975600bb8c22a43288dbab5a6"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.833384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"62a55e7b-220c-44a9-acdd-ad06588c155e","Type":"ContainerStarted","Data":"cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1"} Dec 09 17:20:05 crc kubenswrapper[4853]: W1209 17:20:05.835979 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4818586d_6c0a_4b51_acf3_51605cd25d5f.slice/crio-2c9cfe795b7832afd165f0f1bdc4ba192bd04d177222cea9f8106ceea086bae0 WatchSource:0}: Error finding container 2c9cfe795b7832afd165f0f1bdc4ba192bd04d177222cea9f8106ceea086bae0: Status 404 returned error can't find the container with id 2c9cfe795b7832afd165f0f1bdc4ba192bd04d177222cea9f8106ceea086bae0 Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.836060 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6be2-account-create-update-4g2xf" event={"ID":"309dcb32-e680-454f-a815-05e689a3f35e","Type":"ContainerStarted","Data":"6cfa5931eebc2ea0a3151d85c9336fd5a99cbb6543746b60bc67b2b911f765a0"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.836735 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e400-account-create-update-2b57n"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.843755 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4186-account-create-update-cdtpf" event={"ID":"9d2e1f43-e047-4825-9457-a3a9bcfba205","Type":"ContainerStarted","Data":"06857c5bb58e7082d94ce8445a1d6d1c68ec6f1734acc333a85eeeb08aa02b26"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.855562 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9dphq" event={"ID":"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d","Type":"ContainerStarted","Data":"7d6df7231dd59f4aba1a5301f537cfdecc55208c63a5ba75dd607794f50b9dfe"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.860777 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-573f-account-create-update-gq8pp"] Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.862425 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-cxxpb" 
event={"ID":"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f","Type":"ContainerStarted","Data":"af96a2cd1cf758465c0a4fcadb07c8deafe5746a0e69c52cc6993fd880b4e872"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.864096 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wpnd5" event={"ID":"99cf6f02-4548-4822-8cea-219f8f35db7d","Type":"ContainerStarted","Data":"983c43d4150648db102fc8d7e7fd064d805cb20b005864d98fff8a20d6eea4b7"} Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.932099 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-vqpmd" podStartSLOduration=7.3971276360000005 podStartE2EDuration="25.932074235s" podCreationTimestamp="2025-12-09 17:19:40 +0000 UTC" firstStartedPulling="2025-12-09 17:19:45.542816654 +0000 UTC m=+1412.477555836" lastFinishedPulling="2025-12-09 17:20:04.077763253 +0000 UTC m=+1431.012502435" observedRunningTime="2025-12-09 17:20:05.84826952 +0000 UTC m=+1432.783008702" watchObservedRunningTime="2025-12-09 17:20:05.932074235 +0000 UTC m=+1432.866813417" Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.942979 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.966340667 podStartE2EDuration="13.94296003s" podCreationTimestamp="2025-12-09 17:19:52 +0000 UTC" firstStartedPulling="2025-12-09 17:19:53.062261662 +0000 UTC m=+1419.997000834" lastFinishedPulling="2025-12-09 17:20:04.038881015 +0000 UTC m=+1430.973620197" observedRunningTime="2025-12-09 17:20:05.87293013 +0000 UTC m=+1432.807669312" watchObservedRunningTime="2025-12-09 17:20:05.94296003 +0000 UTC m=+1432.877699202" Dec 09 17:20:05 crc kubenswrapper[4853]: I1209 17:20:05.975737 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.551505 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.554843 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="prometheus" containerID="cri-o://4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76" gracePeriod=600 Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.555148 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="thanos-sidecar" containerID="cri-o://5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531" gracePeriod=600 Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.555233 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="config-reloader" containerID="cri-o://02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a" gracePeriod=600 Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.876873 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-573f-account-create-update-gq8pp" event={"ID":"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85","Type":"ContainerStarted","Data":"99eef2dd2dcfdd11660903a3442c1d3f0e251da047a01d7a5162830f83702329"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.877328 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-573f-account-create-update-gq8pp" event={"ID":"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85","Type":"ContainerStarted","Data":"b9c38cbce435c5bcf9f015e79c2f231fd8d2020b20653284d3a3752ba6f57558"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.880761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6be2-account-create-update-4g2xf" event={"ID":"309dcb32-e680-454f-a815-05e689a3f35e","Type":"ContainerStarted","Data":"be3cebdbbf20436d8dc55e114f62cee403ced6ef5f6c706fc3a95ee0d299360e"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.889884 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-cxxpb" event={"ID":"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f","Type":"ContainerStarted","Data":"260d49b2f46ce80b2cd4512a948f5a8b328e10ba10a48ee87d77f096f27965f5"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.900729 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-573f-account-create-update-gq8pp" podStartSLOduration=4.900704581 podStartE2EDuration="4.900704581s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:06.891937306 +0000 UTC m=+1433.826676478" watchObservedRunningTime="2025-12-09 17:20:06.900704581 +0000 UTC m=+1433.835443773" Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.902510 4853 generic.go:334] "Generic (PLEG): container finished" podID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerID="5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531" exitCode=0 Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.902539 4853 generic.go:334] "Generic (PLEG): container finished" podID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerID="4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76" exitCode=0 Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.902616 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerDied","Data":"5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.902642 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerDied","Data":"4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.904147 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e400-account-create-update-2b57n" event={"ID":"4818586d-6c0a-4b51-acf3-51605cd25d5f","Type":"ContainerStarted","Data":"f75b2e07c359ce6a7266a042ce0c0b76b9cfd87bb72d4f253b9f41c501cf6390"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.904171 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e400-account-create-update-2b57n" event={"ID":"4818586d-6c0a-4b51-acf3-51605cd25d5f","Type":"ContainerStarted","Data":"2c9cfe795b7832afd165f0f1bdc4ba192bd04d177222cea9f8106ceea086bae0"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.913136 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4186-account-create-update-cdtpf" event={"ID":"9d2e1f43-e047-4825-9457-a3a9bcfba205","Type":"ContainerStarted","Data":"f29959f992e411c94b3189e6fcc5f1a5a272ceab6fd7e73c6e582057f41555a4"} Dec 09 17:20:06 
crc kubenswrapper[4853]: I1209 17:20:06.918129 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wpnd5" event={"ID":"99cf6f02-4548-4822-8cea-219f8f35db7d","Type":"ContainerStarted","Data":"7b33d205692c9cfcf00bbf6bac9b579bec7c25b59d488629152e7697c1697ad8"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.921478 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"13feafad7f3e7a1f2fd2b13129a1d706d054d77ce109a458d8021ebeee9073ae"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.923494 4853 generic.go:334] "Generic (PLEG): container finished" podID="4c6e5f07-60db-4bae-9f04-8c5915067796" containerID="49b24ea6857cf3c0600bfedea3e664b0522da9b5fb8aa28138211d1d2841d11f" exitCode=0 Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.923739 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rhs9j" event={"ID":"4c6e5f07-60db-4bae-9f04-8c5915067796","Type":"ContainerDied","Data":"49b24ea6857cf3c0600bfedea3e664b0522da9b5fb8aa28138211d1d2841d11f"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.935230 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-cxxpb" podStartSLOduration=4.929448726 podStartE2EDuration="4.929448726s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:06.912346258 +0000 UTC m=+1433.847085450" watchObservedRunningTime="2025-12-09 17:20:06.929448726 +0000 UTC m=+1433.864187918" Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.945770 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9dphq" event={"ID":"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d","Type":"ContainerStarted","Data":"48d4c4c26bfb4b066ab7d5ad76560feb6d239a5646547c91dc33c6b2d1172b1a"} Dec 09 17:20:06 crc kubenswrapper[4853]: I1209 17:20:06.974491 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-6be2-account-create-update-4g2xf" podStartSLOduration=4.974466936 podStartE2EDuration="4.974466936s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:06.926628297 +0000 UTC m=+1433.861367489" watchObservedRunningTime="2025-12-09 17:20:06.974466936 +0000 UTC m=+1433.909206118" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.025776 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-wpnd5" podStartSLOduration=5.025755691 podStartE2EDuration="5.025755691s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:06.961118572 +0000 UTC m=+1433.895857754" watchObservedRunningTime="2025-12-09 17:20:07.025755691 +0000 UTC m=+1433.960494873" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.032037 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4186-account-create-update-cdtpf" podStartSLOduration=5.032014316 podStartE2EDuration="5.032014316s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:06.999081745 +0000 UTC m=+1433.933820917" watchObservedRunningTime="2025-12-09 17:20:07.032014316 +0000 UTC m=+1433.966753498" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.060830 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-e400-account-create-update-2b57n" podStartSLOduration=5.060815152 podStartE2EDuration="5.060815152s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:07.015088852 +0000 UTC m=+1433.949828034" watchObservedRunningTime="2025-12-09 17:20:07.060815152 +0000 UTC m=+1433.995554334" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.073381 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-9dphq" podStartSLOduration=5.073366123 podStartE2EDuration="5.073366123s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:07.044619179 +0000 UTC m=+1433.979358371" watchObservedRunningTime="2025-12-09 17:20:07.073366123 +0000 UTC m=+1434.008105305" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.755121 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.861067 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.909740 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e5f07-60db-4bae-9f04-8c5915067796-operator-scripts\") pod \"4c6e5f07-60db-4bae-9f04-8c5915067796\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.912313 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p54dw\" (UniqueName: \"kubernetes.io/projected/4c6e5f07-60db-4bae-9f04-8c5915067796-kube-api-access-p54dw\") pod \"4c6e5f07-60db-4bae-9f04-8c5915067796\" (UID: \"4c6e5f07-60db-4bae-9f04-8c5915067796\") " Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.912171 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6e5f07-60db-4bae-9f04-8c5915067796-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c6e5f07-60db-4bae-9f04-8c5915067796" (UID: "4c6e5f07-60db-4bae-9f04-8c5915067796"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.914758 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6e5f07-60db-4bae-9f04-8c5915067796-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.936850 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6e5f07-60db-4bae-9f04-8c5915067796-kube-api-access-p54dw" (OuterVolumeSpecName: "kube-api-access-p54dw") pod "4c6e5f07-60db-4bae-9f04-8c5915067796" (UID: "4c6e5f07-60db-4bae-9f04-8c5915067796"). 
InnerVolumeSpecName "kube-api-access-p54dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.957580 4853 generic.go:334] "Generic (PLEG): container finished" podID="99cf6f02-4548-4822-8cea-219f8f35db7d" containerID="7b33d205692c9cfcf00bbf6bac9b579bec7c25b59d488629152e7697c1697ad8" exitCode=0 Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.957633 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wpnd5" event={"ID":"99cf6f02-4548-4822-8cea-219f8f35db7d","Type":"ContainerDied","Data":"7b33d205692c9cfcf00bbf6bac9b579bec7c25b59d488629152e7697c1697ad8"} Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.964288 4853 generic.go:334] "Generic (PLEG): container finished" podID="2ea83a61-c2d2-44f7-86a2-fe7279fc4b85" containerID="99eef2dd2dcfdd11660903a3442c1d3f0e251da047a01d7a5162830f83702329" exitCode=0 Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.964476 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-573f-account-create-update-gq8pp" event={"ID":"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85","Type":"ContainerDied","Data":"99eef2dd2dcfdd11660903a3442c1d3f0e251da047a01d7a5162830f83702329"} Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.976624 4853 generic.go:334] "Generic (PLEG): container finished" podID="309dcb32-e680-454f-a815-05e689a3f35e" containerID="be3cebdbbf20436d8dc55e114f62cee403ced6ef5f6c706fc3a95ee0d299360e" exitCode=0 Dec 09 17:20:07 crc kubenswrapper[4853]: I1209 17:20:07.976735 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6be2-account-create-update-4g2xf" event={"ID":"309dcb32-e680-454f-a815-05e689a3f35e","Type":"ContainerDied","Data":"be3cebdbbf20436d8dc55e114f62cee403ced6ef5f6c706fc3a95ee0d299360e"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.003447 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rhs9j" event={"ID":"4c6e5f07-60db-4bae-9f04-8c5915067796","Type":"ContainerDied","Data":"86bdbb679289578e7985e2295e6706460fce2a82700722546e30bd4c77f8a83f"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.003485 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86bdbb679289578e7985e2295e6706460fce2a82700722546e30bd4c77f8a83f" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.003539 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rhs9j" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.016749 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-config\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.016829 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-thanos-prometheus-http-client-file\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.016948 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-tls-assets\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.017027 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.017102 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-web-config\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.018971 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kzbw\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-kube-api-access-2kzbw\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.019030 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39b91583-7835-4bb9-ad7f-32fae11f2b77-prometheus-metric-storage-rulefiles-0\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.019074 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39b91583-7835-4bb9-ad7f-32fae11f2b77-config-out\") pod \"39b91583-7835-4bb9-ad7f-32fae11f2b77\" (UID: \"39b91583-7835-4bb9-ad7f-32fae11f2b77\") " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.021505 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p54dw\" (UniqueName: \"kubernetes.io/projected/4c6e5f07-60db-4bae-9f04-8c5915067796-kube-api-access-p54dw\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.023406 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b91583-7835-4bb9-ad7f-32fae11f2b77-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod 
"39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.026575 4853 generic.go:334] "Generic (PLEG): container finished" podID="d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d" containerID="48d4c4c26bfb4b066ab7d5ad76560feb6d239a5646547c91dc33c6b2d1172b1a" exitCode=0 Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.026710 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9dphq" event={"ID":"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d","Type":"ContainerDied","Data":"48d4c4c26bfb4b066ab7d5ad76560feb6d239a5646547c91dc33c6b2d1172b1a"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.027190 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.034065 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b91583-7835-4bb9-ad7f-32fae11f2b77-config-out" (OuterVolumeSpecName: "config-out") pod "39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.034964 4853 generic.go:334] "Generic (PLEG): container finished" podID="6bdeb0f7-5749-4ef1-baca-b0e6f992c48f" containerID="260d49b2f46ce80b2cd4512a948f5a8b328e10ba10a48ee87d77f096f27965f5" exitCode=0 Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.035049 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-cxxpb" event={"ID":"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f","Type":"ContainerDied","Data":"260d49b2f46ce80b2cd4512a948f5a8b328e10ba10a48ee87d77f096f27965f5"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.036833 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.048127 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.048943 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-config" (OuterVolumeSpecName: "config") pod "39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.053231 4853 generic.go:334] "Generic (PLEG): container finished" podID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerID="02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a" exitCode=0 Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.053292 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerDied","Data":"02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.053318 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39b91583-7835-4bb9-ad7f-32fae11f2b77","Type":"ContainerDied","Data":"c32996bf4c134e9435a41247c1b7ac7d9dcba107c194213179d246718fe24070"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.053333 4853 scope.go:117] "RemoveContainer" containerID="5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.053472 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.057460 4853 generic.go:334] "Generic (PLEG): container finished" podID="4818586d-6c0a-4b51-acf3-51605cd25d5f" containerID="f75b2e07c359ce6a7266a042ce0c0b76b9cfd87bb72d4f253b9f41c501cf6390" exitCode=0 Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.057519 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e400-account-create-update-2b57n" event={"ID":"4818586d-6c0a-4b51-acf3-51605cd25d5f","Type":"ContainerDied","Data":"f75b2e07c359ce6a7266a042ce0c0b76b9cfd87bb72d4f253b9f41c501cf6390"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.066693 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-kube-api-access-2kzbw" (OuterVolumeSpecName: "kube-api-access-2kzbw") pod "39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "kube-api-access-2kzbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.069833 4853 generic.go:334] "Generic (PLEG): container finished" podID="9d2e1f43-e047-4825-9457-a3a9bcfba205" containerID="f29959f992e411c94b3189e6fcc5f1a5a272ceab6fd7e73c6e582057f41555a4" exitCode=0 Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.069892 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4186-account-create-update-cdtpf" event={"ID":"9d2e1f43-e047-4825-9457-a3a9bcfba205","Type":"ContainerDied","Data":"f29959f992e411c94b3189e6fcc5f1a5a272ceab6fd7e73c6e582057f41555a4"} Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.112053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-web-config" (OuterVolumeSpecName: "web-config") pod "39b91583-7835-4bb9-ad7f-32fae11f2b77" (UID: "39b91583-7835-4bb9-ad7f-32fae11f2b77"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124120 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kzbw\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-kube-api-access-2kzbw\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124150 4853 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39b91583-7835-4bb9-ad7f-32fae11f2b77-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124162 4853 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39b91583-7835-4bb9-ad7f-32fae11f2b77-config-out\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124172 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124181 4853 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124191 4853 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39b91583-7835-4bb9-ad7f-32fae11f2b77-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124240 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.124251 4853 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39b91583-7835-4bb9-ad7f-32fae11f2b77-web-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.151484 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.226044 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.369099 4853 scope.go:117] "RemoveContainer" containerID="02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.392289 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.402298 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.402953 4853 scope.go:117] "RemoveContainer" containerID="4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.425733 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 
09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.426222 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="thanos-sidecar" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426243 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="thanos-sidecar" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.426261 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="prometheus" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426268 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="prometheus" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.426290 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6e5f07-60db-4bae-9f04-8c5915067796" containerName="mariadb-database-create" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426297 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6e5f07-60db-4bae-9f04-8c5915067796" containerName="mariadb-database-create" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.426332 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="config-reloader" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426340 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="config-reloader" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.426352 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e3401b-eae8-4c50-a73b-686525de14a2" containerName="swift-ring-rebalance" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426361 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e3401b-eae8-4c50-a73b-686525de14a2" containerName="swift-ring-rebalance" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.426375 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="init-config-reloader" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426383 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="init-config-reloader" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426642 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e3401b-eae8-4c50-a73b-686525de14a2" containerName="swift-ring-rebalance" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426664 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6e5f07-60db-4bae-9f04-8c5915067796" containerName="mariadb-database-create" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426691 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="config-reloader" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426708 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="prometheus" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.426719 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="thanos-sidecar" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.433003 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.436170 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.436382 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.436522 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.436674 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-g6cfs" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.436845 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.437390 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.444954 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.457365 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.469211 4853 scope.go:117] "RemoveContainer" containerID="ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.497650 4853 scope.go:117] "RemoveContainer" containerID="5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.498092 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531\": container with ID starting with 5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531 not found: ID does not exist" containerID="5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.498143 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531"} err="failed to get container status \"5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531\": rpc error: code = NotFound desc = could not find container \"5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531\": container with ID starting with 5398e3349d7725e2c797e96e7b9975119bd204a42b78c89bcce2f54e8523f531 not found: ID does not exist" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.498171 4853 scope.go:117] "RemoveContainer" containerID="02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.498442 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a\": container with ID starting with 02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a not found: ID does not exist" 
containerID="02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.498488 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a"} err="failed to get container status \"02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a\": rpc error: code = NotFound desc = could not find container \"02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a\": container with ID starting with 02418096eaa09b5702cd40c4992ad8a571f9b2ad157f2d4ace6b61c87aa08f3a not found: ID does not exist" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.498511 4853 scope.go:117] "RemoveContainer" containerID="4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.498834 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76\": container with ID starting with 4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76 not found: ID does not exist" containerID="4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.498884 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76"} err="failed to get container status \"4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76\": rpc error: code = NotFound desc = could not find container \"4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76\": container with ID starting with 4a39fb9ae40c84d85e93118c55c1a270d6cb209b1e655e19978e26e6b039fe76 not found: ID does not exist" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.498921 4853 scope.go:117] "RemoveContainer" containerID="ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b" Dec 09 17:20:08 crc kubenswrapper[4853]: E1209 17:20:08.499321 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b\": container with ID starting with ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b not found: ID does not exist" containerID="ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.499346 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b"} err="failed to get container status \"ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b\": rpc error: code = NotFound desc = could not find container \"ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b\": container with ID starting with ea59645b67dfc0df9603897c0b19559b6bcc93602752630f7960823a3f6ea88b not found: ID does not exist" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.635233 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " 
pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.635638 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.635663 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lltc\" (UniqueName: \"kubernetes.io/projected/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-kube-api-access-9lltc\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.635686 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.635710 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.635886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.635928 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.636034 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.636110 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.636189 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.636228 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.737792 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.737870 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.737893 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lltc\" (UniqueName: \"kubernetes.io/projected/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-kube-api-access-9lltc\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.737915 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.737938 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.737972 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.737990 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc 
kubenswrapper[4853]: I1209 17:20:08.738029 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.738063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.738082 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.738107 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.739560 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.740025 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.748909 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.748502 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.749617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.757396 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.757674 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.757928 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.758428 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.759852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.762099 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lltc\" (UniqueName: \"kubernetes.io/projected/c5edba71-6b69-4f76-9dde-ed6c7a7ecb71-kube-api-access-9lltc\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:08 crc kubenswrapper[4853]: I1209 17:20:08.803725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"prometheus-metric-storage-0\" (UID: \"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71\") " pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:09 crc kubenswrapper[4853]: I1209 17:20:09.068586 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:09 crc kubenswrapper[4853]: I1209 17:20:09.090935 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"2ec4ffc6c4c1e92ada54e832ea770730fc6df54b94a884340e8720120c0f8879"} Dec 09 17:20:09 crc kubenswrapper[4853]: I1209 17:20:09.090990 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"e68e70cb449a5be3d4bf01ce4cf2d6dec286069a62dde9d57f06e02c1c0f790d"} Dec 09 17:20:09 crc kubenswrapper[4853]: I1209 17:20:09.592943 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" path="/var/lib/kubelet/pods/39b91583-7835-4bb9-ad7f-32fae11f2b77/volumes" Dec 09 17:20:10 crc kubenswrapper[4853]: I1209 17:20:10.111655 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"e978b022ee8f426fff9b1ce3f710db6f0fe7a853f4a349df80f3b990591ddd51"} Dec 09 17:20:10 crc kubenswrapper[4853]: I1209 17:20:10.249015 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 17:20:10 crc kubenswrapper[4853]: I1209 17:20:10.856426 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="39b91583-7835-4bb9-ad7f-32fae11f2b77" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.125443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71","Type":"ContainerStarted","Data":"32239bf281f48a9e21e30e3fd11d6eb4fed24a343dd5381389d2f393265530e6"} Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.128945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9dphq" event={"ID":"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d","Type":"ContainerDied","Data":"7d6df7231dd59f4aba1a5301f537cfdecc55208c63a5ba75dd607794f50b9dfe"} Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.128981 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d6df7231dd59f4aba1a5301f537cfdecc55208c63a5ba75dd607794f50b9dfe" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.351004 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.397825 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.410002 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.418457 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.433267 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.446406 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.458884 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507416 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95dfl\" (UniqueName: \"kubernetes.io/projected/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-kube-api-access-95dfl\") pod \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507501 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-operator-scripts\") pod \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\" (UID: \"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507651 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52rrv\" (UniqueName: \"kubernetes.io/projected/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-kube-api-access-52rrv\") pod \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507694 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2e1f43-e047-4825-9457-a3a9bcfba205-operator-scripts\") pod \"9d2e1f43-e047-4825-9457-a3a9bcfba205\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507727 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-operator-scripts\") pod \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\" (UID: \"d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507777 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knqc8\" (UniqueName: \"kubernetes.io/projected/9d2e1f43-e047-4825-9457-a3a9bcfba205-kube-api-access-knqc8\") pod \"9d2e1f43-e047-4825-9457-a3a9bcfba205\" (UID: \"9d2e1f43-e047-4825-9457-a3a9bcfba205\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507805 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99cf6f02-4548-4822-8cea-219f8f35db7d-operator-scripts\") pod \"99cf6f02-4548-4822-8cea-219f8f35db7d\" (UID: \"99cf6f02-4548-4822-8cea-219f8f35db7d\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.507840 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nj98\" (UniqueName: \"kubernetes.io/projected/99cf6f02-4548-4822-8cea-219f8f35db7d-kube-api-access-7nj98\") pod \"99cf6f02-4548-4822-8cea-219f8f35db7d\" (UID: \"99cf6f02-4548-4822-8cea-219f8f35db7d\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.508617 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9d2e1f43-e047-4825-9457-a3a9bcfba205-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d2e1f43-e047-4825-9457-a3a9bcfba205" (UID: "9d2e1f43-e047-4825-9457-a3a9bcfba205"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.509181 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bdeb0f7-5749-4ef1-baca-b0e6f992c48f" (UID: "6bdeb0f7-5749-4ef1-baca-b0e6f992c48f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.509700 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.509723 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2e1f43-e047-4825-9457-a3a9bcfba205-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.510229 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99cf6f02-4548-4822-8cea-219f8f35db7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99cf6f02-4548-4822-8cea-219f8f35db7d" (UID: "99cf6f02-4548-4822-8cea-219f8f35db7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.512989 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d" (UID: "d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.529073 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-kube-api-access-52rrv" (OuterVolumeSpecName: "kube-api-access-52rrv") pod "d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d" (UID: "d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d"). InnerVolumeSpecName "kube-api-access-52rrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.530337 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99cf6f02-4548-4822-8cea-219f8f35db7d-kube-api-access-7nj98" (OuterVolumeSpecName: "kube-api-access-7nj98") pod "99cf6f02-4548-4822-8cea-219f8f35db7d" (UID: "99cf6f02-4548-4822-8cea-219f8f35db7d"). InnerVolumeSpecName "kube-api-access-7nj98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.535031 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-kube-api-access-95dfl" (OuterVolumeSpecName: "kube-api-access-95dfl") pod "6bdeb0f7-5749-4ef1-baca-b0e6f992c48f" (UID: "6bdeb0f7-5749-4ef1-baca-b0e6f992c48f"). InnerVolumeSpecName "kube-api-access-95dfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.554736 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2e1f43-e047-4825-9457-a3a9bcfba205-kube-api-access-knqc8" (OuterVolumeSpecName: "kube-api-access-knqc8") pod "9d2e1f43-e047-4825-9457-a3a9bcfba205" (UID: "9d2e1f43-e047-4825-9457-a3a9bcfba205"). InnerVolumeSpecName "kube-api-access-knqc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.610702 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-operator-scripts\") pod \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.610756 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd5h4\" (UniqueName: \"kubernetes.io/projected/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-kube-api-access-qd5h4\") pod \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\" (UID: \"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.610865 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czn2g\" (UniqueName: \"kubernetes.io/projected/4818586d-6c0a-4b51-acf3-51605cd25d5f-kube-api-access-czn2g\") pod \"4818586d-6c0a-4b51-acf3-51605cd25d5f\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.610966 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/309dcb32-e680-454f-a815-05e689a3f35e-operator-scripts\") pod \"309dcb32-e680-454f-a815-05e689a3f35e\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.611031 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td2fd\" (UniqueName: \"kubernetes.io/projected/309dcb32-e680-454f-a815-05e689a3f35e-kube-api-access-td2fd\") pod \"309dcb32-e680-454f-a815-05e689a3f35e\" (UID: \"309dcb32-e680-454f-a815-05e689a3f35e\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.611067 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4818586d-6c0a-4b51-acf3-51605cd25d5f-operator-scripts\") pod \"4818586d-6c0a-4b51-acf3-51605cd25d5f\" (UID: \"4818586d-6c0a-4b51-acf3-51605cd25d5f\") " Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.611171 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ea83a61-c2d2-44f7-86a2-fe7279fc4b85" (UID: "2ea83a61-c2d2-44f7-86a2-fe7279fc4b85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.611726 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309dcb32-e680-454f-a815-05e689a3f35e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "309dcb32-e680-454f-a815-05e689a3f35e" (UID: "309dcb32-e680-454f-a815-05e689a3f35e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.611747 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4818586d-6c0a-4b51-acf3-51605cd25d5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4818586d-6c0a-4b51-acf3-51605cd25d5f" (UID: "4818586d-6c0a-4b51-acf3-51605cd25d5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612059 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4818586d-6c0a-4b51-acf3-51605cd25d5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612089 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95dfl\" (UniqueName: \"kubernetes.io/projected/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f-kube-api-access-95dfl\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612105 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612117 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52rrv\" (UniqueName: \"kubernetes.io/projected/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-kube-api-access-52rrv\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612129 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612141 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knqc8\" (UniqueName: \"kubernetes.io/projected/9d2e1f43-e047-4825-9457-a3a9bcfba205-kube-api-access-knqc8\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612152 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/309dcb32-e680-454f-a815-05e689a3f35e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612164 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99cf6f02-4548-4822-8cea-219f8f35db7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.612176 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nj98\" (UniqueName: \"kubernetes.io/projected/99cf6f02-4548-4822-8cea-219f8f35db7d-kube-api-access-7nj98\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.614708 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4818586d-6c0a-4b51-acf3-51605cd25d5f-kube-api-access-czn2g" (OuterVolumeSpecName: "kube-api-access-czn2g") pod "4818586d-6c0a-4b51-acf3-51605cd25d5f" (UID: "4818586d-6c0a-4b51-acf3-51605cd25d5f"). InnerVolumeSpecName "kube-api-access-czn2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.616146 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-kube-api-access-qd5h4" (OuterVolumeSpecName: "kube-api-access-qd5h4") pod "2ea83a61-c2d2-44f7-86a2-fe7279fc4b85" (UID: "2ea83a61-c2d2-44f7-86a2-fe7279fc4b85"). InnerVolumeSpecName "kube-api-access-qd5h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.617352 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309dcb32-e680-454f-a815-05e689a3f35e-kube-api-access-td2fd" (OuterVolumeSpecName: "kube-api-access-td2fd") pod "309dcb32-e680-454f-a815-05e689a3f35e" (UID: "309dcb32-e680-454f-a815-05e689a3f35e"). InnerVolumeSpecName "kube-api-access-td2fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.715126 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd5h4\" (UniqueName: \"kubernetes.io/projected/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85-kube-api-access-qd5h4\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.715977 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czn2g\" (UniqueName: \"kubernetes.io/projected/4818586d-6c0a-4b51-acf3-51605cd25d5f-kube-api-access-czn2g\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:11 crc kubenswrapper[4853]: I1209 17:20:11.716013 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td2fd\" (UniqueName: \"kubernetes.io/projected/309dcb32-e680-454f-a815-05e689a3f35e-kube-api-access-td2fd\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.146477 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-cxxpb" event={"ID":"6bdeb0f7-5749-4ef1-baca-b0e6f992c48f","Type":"ContainerDied","Data":"af96a2cd1cf758465c0a4fcadb07c8deafe5746a0e69c52cc6993fd880b4e872"} Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.146541 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af96a2cd1cf758465c0a4fcadb07c8deafe5746a0e69c52cc6993fd880b4e872" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.146699 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-cxxpb" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.149527 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wpnd5" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.150389 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wpnd5" event={"ID":"99cf6f02-4548-4822-8cea-219f8f35db7d","Type":"ContainerDied","Data":"983c43d4150648db102fc8d7e7fd064d805cb20b005864d98fff8a20d6eea4b7"} Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.150430 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983c43d4150648db102fc8d7e7fd064d805cb20b005864d98fff8a20d6eea4b7" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.153971 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"fd961f78f75d76d548ee304b4c04c9e48375cd5d17c40cbeaee2c010f3f05b62"} Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.156460 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e400-account-create-update-2b57n" event={"ID":"4818586d-6c0a-4b51-acf3-51605cd25d5f","Type":"ContainerDied","Data":"2c9cfe795b7832afd165f0f1bdc4ba192bd04d177222cea9f8106ceea086bae0"} Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.156498 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c9cfe795b7832afd165f0f1bdc4ba192bd04d177222cea9f8106ceea086bae0" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.156567 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e400-account-create-update-2b57n" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.160813 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4186-account-create-update-cdtpf" event={"ID":"9d2e1f43-e047-4825-9457-a3a9bcfba205","Type":"ContainerDied","Data":"06857c5bb58e7082d94ce8445a1d6d1c68ec6f1734acc333a85eeeb08aa02b26"} Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.160860 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06857c5bb58e7082d94ce8445a1d6d1c68ec6f1734acc333a85eeeb08aa02b26" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.160866 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4186-account-create-update-cdtpf" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.163899 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-573f-account-create-update-gq8pp" event={"ID":"2ea83a61-c2d2-44f7-86a2-fe7279fc4b85","Type":"ContainerDied","Data":"b9c38cbce435c5bcf9f015e79c2f231fd8d2020b20653284d3a3752ba6f57558"} Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.163940 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c38cbce435c5bcf9f015e79c2f231fd8d2020b20653284d3a3752ba6f57558" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.164070 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-573f-account-create-update-gq8pp" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.167268 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9dphq" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.167296 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6be2-account-create-update-4g2xf" Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.167363 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6be2-account-create-update-4g2xf" event={"ID":"309dcb32-e680-454f-a815-05e689a3f35e","Type":"ContainerDied","Data":"6cfa5931eebc2ea0a3151d85c9336fd5a99cbb6543746b60bc67b2b911f765a0"} Dec 09 17:20:12 crc kubenswrapper[4853]: I1209 17:20:12.167389 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cfa5931eebc2ea0a3151d85c9336fd5a99cbb6543746b60bc67b2b911f765a0" Dec 09 17:20:15 crc kubenswrapper[4853]: I1209 17:20:15.195997 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71","Type":"ContainerStarted","Data":"c60364e309e6736cda3e7e71f6e526343044c48b02b3f1629ffd989d821c7558"} Dec 09 17:20:15 crc kubenswrapper[4853]: I1209 17:20:15.199815 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tvtzt" event={"ID":"704f2e28-f375-4a95-a680-87e1bcb93058","Type":"ContainerStarted","Data":"1552801835f6066e074352376905312bad62c3e478b8613784d0612de9f8e602"} Dec 09 17:20:15 crc kubenswrapper[4853]: I1209 17:20:15.254685 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tvtzt" podStartSLOduration=4.183676426 podStartE2EDuration="13.25458632s" podCreationTimestamp="2025-12-09 17:20:02 +0000 UTC" firstStartedPulling="2025-12-09 17:20:05.777049766 +0000 UTC m=+1432.711788948" lastFinishedPulling="2025-12-09 17:20:14.84795966 +0000 UTC m=+1441.782698842" observedRunningTime="2025-12-09 17:20:15.236997037 +0000 UTC m=+1442.171736209" watchObservedRunningTime="2025-12-09 17:20:15.25458632 +0000 UTC m=+1442.189325502" Dec 09 17:20:16 crc kubenswrapper[4853]: I1209 17:20:16.218154 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"e967cab3797b896235b8a5095de7197b0eb99819736dff52e3e10d6096e57858"} Dec 09 17:20:16 crc kubenswrapper[4853]: I1209 17:20:16.218470 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"6f258c05ce1672a6ab21c1eebc7714675c04448a0bd77b14308bbbeafe63c71f"} Dec 09 17:20:16 crc kubenswrapper[4853]: I1209 17:20:16.218481 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"feae546dadf06578213ce21ed02733f9690fcf551882b7395d31c048d1cf468b"} Dec 09 17:20:16 crc kubenswrapper[4853]: I1209 17:20:16.218489 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"ff8bf6d9a152d0783e5d2bf2bf916e3efa8c38208bee55a418b03072c2f97553"} Dec 09 17:20:16 crc kubenswrapper[4853]: I1209 17:20:16.219657 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" containerID="5c4d4c5782818a71fc3061056fbff6a0d54f8c8975600bb8c22a43288dbab5a6" exitCode=0 Dec 09 17:20:16 crc kubenswrapper[4853]: I1209 17:20:16.219737 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vqpmd" 
event={"ID":"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f","Type":"ContainerDied","Data":"5c4d4c5782818a71fc3061056fbff6a0d54f8c8975600bb8c22a43288dbab5a6"} Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.728746 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vqpmd" Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.867493 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-db-sync-config-data\") pod \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.867561 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-combined-ca-bundle\") pod \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.867588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-222gq\" (UniqueName: \"kubernetes.io/projected/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-kube-api-access-222gq\") pod \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.867698 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-config-data\") pod \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\" (UID: \"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f\") " Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.905826 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" (UID: "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.912946 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-kube-api-access-222gq" (OuterVolumeSpecName: "kube-api-access-222gq") pod "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" (UID: "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f"). InnerVolumeSpecName "kube-api-access-222gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.973111 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" (UID: "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.977192 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.977231 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:17 crc kubenswrapper[4853]: I1209 17:20:17.977241 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-222gq\" (UniqueName: \"kubernetes.io/projected/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-kube-api-access-222gq\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.027908 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-config-data" (OuterVolumeSpecName: "config-data") pod "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" (UID: "bfca4d2f-3a00-4f1f-8654-b7ef5333d22f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.079027 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.242272 4853 generic.go:334] "Generic (PLEG): container finished" podID="704f2e28-f375-4a95-a680-87e1bcb93058" containerID="1552801835f6066e074352376905312bad62c3e478b8613784d0612de9f8e602" exitCode=0 Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.242345 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tvtzt" event={"ID":"704f2e28-f375-4a95-a680-87e1bcb93058","Type":"ContainerDied","Data":"1552801835f6066e074352376905312bad62c3e478b8613784d0612de9f8e602"} Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.247571 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vqpmd" event={"ID":"bfca4d2f-3a00-4f1f-8654-b7ef5333d22f","Type":"ContainerDied","Data":"2279c91d7f7abb56d0f4a78823de30913d90437d3178ef79b26a542893f84265"} Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.247623 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2279c91d7f7abb56d0f4a78823de30913d90437d3178ef79b26a542893f84265" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.247674 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vqpmd" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.603063 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hrmsp"] Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622405 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2e1f43-e047-4825-9457-a3a9bcfba205" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622454 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2e1f43-e047-4825-9457-a3a9bcfba205" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622469 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309dcb32-e680-454f-a815-05e689a3f35e" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622475 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="309dcb32-e680-454f-a815-05e689a3f35e" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622523 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99cf6f02-4548-4822-8cea-219f8f35db7d" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622532 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="99cf6f02-4548-4822-8cea-219f8f35db7d" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622544 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4818586d-6c0a-4b51-acf3-51605cd25d5f" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622552 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4818586d-6c0a-4b51-acf3-51605cd25d5f" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622562 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea83a61-c2d2-44f7-86a2-fe7279fc4b85" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622571 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea83a61-c2d2-44f7-86a2-fe7279fc4b85" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622612 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622622 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622637 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdeb0f7-5749-4ef1-baca-b0e6f992c48f" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622644 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdeb0f7-5749-4ef1-baca-b0e6f992c48f" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: E1209 17:20:18.622683 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" containerName="glance-db-sync" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622692 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" 
containerName="glance-db-sync" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622926 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="309dcb32-e680-454f-a815-05e689a3f35e" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622939 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" containerName="glance-db-sync" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622954 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622963 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="99cf6f02-4548-4822-8cea-219f8f35db7d" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.622997 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4818586d-6c0a-4b51-acf3-51605cd25d5f" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.623004 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bdeb0f7-5749-4ef1-baca-b0e6f992c48f" containerName="mariadb-database-create" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.623016 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2e1f43-e047-4825-9457-a3a9bcfba205" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.623025 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea83a61-c2d2-44f7-86a2-fe7279fc4b85" containerName="mariadb-account-create-update" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.638000 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hrmsp"] Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.638103 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.793207 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.793313 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-config\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.793349 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.793407 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.793485 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js44r\" (UniqueName: \"kubernetes.io/projected/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-kube-api-access-js44r\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.895271 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.895410 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-config\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.895445 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.895512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: 
\"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.895539 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js44r\" (UniqueName: \"kubernetes.io/projected/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-kube-api-access-js44r\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.896231 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.896490 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-config\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.898539 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.898633 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.929688 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js44r\" (UniqueName: \"kubernetes.io/projected/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-kube-api-access-js44r\") pod \"dnsmasq-dns-5b946c75cc-hrmsp\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:18 crc kubenswrapper[4853]: I1209 17:20:18.988128 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:19 crc kubenswrapper[4853]: I1209 17:20:19.286077 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"e51f56d36e970eaffac797b8b5a12ee74331361ce1b384474cfc6884064efebb"} Dec 09 17:20:19 crc kubenswrapper[4853]: I1209 17:20:19.286417 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"bc5952ae2aad9eb2555b36e14953754ea5180c0f68094f7e38fdc0cf20114624"} Dec 09 17:20:19 crc kubenswrapper[4853]: I1209 17:20:19.600565 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hrmsp"] Dec 09 17:20:19 crc kubenswrapper[4853]: W1209 17:20:19.646853 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff267351_a3b2_46e9_b67a_17f3fe4e75fd.slice/crio-337d2aee7511ce6f8a0ef8d76944516484a48d9c3e1397daec4a1958f18c15c7 WatchSource:0}: Error finding container 337d2aee7511ce6f8a0ef8d76944516484a48d9c3e1397daec4a1958f18c15c7: Status 404 returned error can't find the container with id 337d2aee7511ce6f8a0ef8d76944516484a48d9c3e1397daec4a1958f18c15c7 Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.017879 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.138240 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-combined-ca-bundle\") pod \"704f2e28-f375-4a95-a680-87e1bcb93058\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.138296 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnhhh\" (UniqueName: \"kubernetes.io/projected/704f2e28-f375-4a95-a680-87e1bcb93058-kube-api-access-bnhhh\") pod \"704f2e28-f375-4a95-a680-87e1bcb93058\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.138556 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-config-data\") pod \"704f2e28-f375-4a95-a680-87e1bcb93058\" (UID: \"704f2e28-f375-4a95-a680-87e1bcb93058\") " Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.298520 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" event={"ID":"ff267351-a3b2-46e9-b67a-17f3fe4e75fd","Type":"ContainerStarted","Data":"337d2aee7511ce6f8a0ef8d76944516484a48d9c3e1397daec4a1958f18c15c7"} Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.304527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"60a38584d7b583fea19b2cf4007c1a99079b31f0e836d0086aed5aa40da44438"} Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.304581 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"737e51c426ded7462a5777ecd08f04d9a1c10d7322fad23ee657bdf795eb40a8"} 
Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.306397 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tvtzt" event={"ID":"704f2e28-f375-4a95-a680-87e1bcb93058","Type":"ContainerDied","Data":"11c1d762f85474d4c0c44f1a889afae65a382b6d628ef8be5540b3477ff1dd0e"} Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.306431 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c1d762f85474d4c0c44f1a889afae65a382b6d628ef8be5540b3477ff1dd0e" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.306490 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tvtzt" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.311213 4853 generic.go:334] "Generic (PLEG): container finished" podID="c5edba71-6b69-4f76-9dde-ed6c7a7ecb71" containerID="c60364e309e6736cda3e7e71f6e526343044c48b02b3f1629ffd989d821c7558" exitCode=0 Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.311265 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71","Type":"ContainerDied","Data":"c60364e309e6736cda3e7e71f6e526343044c48b02b3f1629ffd989d821c7558"} Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.347087 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704f2e28-f375-4a95-a680-87e1bcb93058-kube-api-access-bnhhh" (OuterVolumeSpecName: "kube-api-access-bnhhh") pod "704f2e28-f375-4a95-a680-87e1bcb93058" (UID: "704f2e28-f375-4a95-a680-87e1bcb93058"). InnerVolumeSpecName "kube-api-access-bnhhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.348168 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnhhh\" (UniqueName: \"kubernetes.io/projected/704f2e28-f375-4a95-a680-87e1bcb93058-kube-api-access-bnhhh\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.399481 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "704f2e28-f375-4a95-a680-87e1bcb93058" (UID: "704f2e28-f375-4a95-a680-87e1bcb93058"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.452531 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.586872 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lk4j5"] Dec 09 17:20:20 crc kubenswrapper[4853]: E1209 17:20:20.587476 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704f2e28-f375-4a95-a680-87e1bcb93058" containerName="keystone-db-sync" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.587493 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="704f2e28-f375-4a95-a680-87e1bcb93058" containerName="keystone-db-sync" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.587798 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="704f2e28-f375-4a95-a680-87e1bcb93058" containerName="keystone-db-sync" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.588684 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.604767 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.653421 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lk4j5"] Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.675064 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stldv\" (UniqueName: \"kubernetes.io/projected/dbb4a5d2-dddd-4567-846c-2f81a64049f6-kube-api-access-stldv\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.675276 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-fernet-keys\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.675379 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-config-data\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.675417 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-credential-keys\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.675464 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-scripts\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " 
pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.675745 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-combined-ca-bundle\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.686410 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hrmsp"] Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.729365 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784f69c749-xpz6j"] Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.742847 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.778279 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-k5v9z"] Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.779753 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.779835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-combined-ca-bundle\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.779925 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stldv\" (UniqueName: \"kubernetes.io/projected/dbb4a5d2-dddd-4567-846c-2f81a64049f6-kube-api-access-stldv\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.779994 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-fernet-keys\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.780039 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-config-data\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.780061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-credential-keys\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.780085 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-scripts\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 
09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.796637 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.796854 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.797337 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.797991 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-tdgvr" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.823610 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-combined-ca-bundle\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.843241 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-credential-keys\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.843290 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-fernet-keys\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.843482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-scripts\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.847970 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-config-data\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.852822 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-xpz6j"] Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.868335 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stldv\" (UniqueName: \"kubernetes.io/projected/dbb4a5d2-dddd-4567-846c-2f81a64049f6-kube-api-access-stldv\") pod \"keystone-bootstrap-lk4j5\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.883945 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-combined-ca-bundle\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.884041 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.884087 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-config-data\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.884112 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhxj\" (UniqueName: \"kubernetes.io/projected/9a469b8b-0a53-4628-a940-d589b48baae6-kube-api-access-dvhxj\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.884132 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-config\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.884158 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-dns-svc\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.884204 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjtb4\" (UniqueName: \"kubernetes.io/projected/20655566-5ed0-4732-835a-0bd04a51988f-kube-api-access-qjtb4\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.884228 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.932003 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k5v9z"] Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.992507 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.992768 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-combined-ca-bundle\") pod \"heat-db-sync-k5v9z\" (UID: 
\"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.992943 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.993055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-config-data\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.993105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvhxj\" (UniqueName: \"kubernetes.io/projected/9a469b8b-0a53-4628-a940-d589b48baae6-kube-api-access-dvhxj\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.993146 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-config\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.993199 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-dns-svc\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.993295 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjtb4\" (UniqueName: \"kubernetes.io/projected/20655566-5ed0-4732-835a-0bd04a51988f-kube-api-access-qjtb4\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:20 crc kubenswrapper[4853]: I1209 17:20:20.994375 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4pvgk"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.000178 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-config\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.001192 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-dns-svc\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.001928 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.002354 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.003133 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.017452 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-config-data\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.033666 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.035803 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.036060 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lrpt2" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.048149 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-combined-ca-bundle\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.073912 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvhxj\" (UniqueName: \"kubernetes.io/projected/9a469b8b-0a53-4628-a940-d589b48baae6-kube-api-access-dvhxj\") pod \"dnsmasq-dns-784f69c749-xpz6j\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.076459 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjtb4\" (UniqueName: \"kubernetes.io/projected/20655566-5ed0-4732-835a-0bd04a51988f-kube-api-access-qjtb4\") pod \"heat-db-sync-k5v9z\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.091167 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4pvgk"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.100900 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-combined-ca-bundle\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.100970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nwh9p\" (UniqueName: \"kubernetes.io/projected/b9742aa9-091a-499a-8fa7-49295b5e9488-kube-api-access-nwh9p\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.101010 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-config\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.104129 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-config-data" (OuterVolumeSpecName: "config-data") pod "704f2e28-f375-4a95-a680-87e1bcb93058" (UID: "704f2e28-f375-4a95-a680-87e1bcb93058"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.149316 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tpt7s"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.158129 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.160433 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wj6zm" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.160635 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.164409 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.205825 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-combined-ca-bundle\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.205979 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwh9p\" (UniqueName: \"kubernetes.io/projected/b9742aa9-091a-499a-8fa7-49295b5e9488-kube-api-access-nwh9p\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.206027 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-config\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.206132 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704f2e28-f375-4a95-a680-87e1bcb93058-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.216359 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-config\") pod \"neutron-db-sync-4pvgk\" (UID: 
\"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.216434 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tpt7s"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.219400 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-combined-ca-bundle\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.265237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwh9p\" (UniqueName: \"kubernetes.io/projected/b9742aa9-091a-499a-8fa7-49295b5e9488-kube-api-access-nwh9p\") pod \"neutron-db-sync-4pvgk\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.286344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-xpz6j"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.297099 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-dlmgn"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.298788 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.308427 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-scripts\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.308517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-config-data\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.308567 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-combined-ca-bundle\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.308635 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18c4cb93-d59f-4160-9e4d-506184f49afe-etc-machine-id\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.308710 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-db-sync-config-data\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.308757 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbr7q\" (UniqueName: \"kubernetes.io/projected/18c4cb93-d59f-4160-9e4d-506184f49afe-kube-api-access-kbr7q\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.351835 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-dlmgn"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.363532 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-696ml"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.365010 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.368202 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.368789 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.368920 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fnggj" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.391904 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-696ml"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.405173 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"40a3c1c2ad7339843a2b929256452dc36b95d10988fe9e9c6d4626c14d5b335f"} Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.410855 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-db-sync-config-data\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.410906 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbr7q\" (UniqueName: \"kubernetes.io/projected/18c4cb93-d59f-4160-9e4d-506184f49afe-kube-api-access-kbr7q\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.410947 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-scripts\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.410995 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-config-data\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.411029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-sb\") pod 
\"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.411050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-combined-ca-bundle\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.411071 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-config\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.411097 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxfp5\" (UniqueName: \"kubernetes.io/projected/756e7a63-0b67-4fbf-b7e6-62f06f087a42-kube-api-access-lxfp5\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.411121 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-dns-svc\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.411142 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18c4cb93-d59f-4160-9e4d-506184f49afe-etc-machine-id\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.411165 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: E1209 17:20:21.412412 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod704f2e28_f375_4a95_a680_87e1bcb93058.slice/crio-11c1d762f85474d4c0c44f1a889afae65a382b6d628ef8be5540b3477ff1dd0e\": RecentStats: unable to find data in memory cache]" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.412443 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.414009 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18c4cb93-d59f-4160-9e4d-506184f49afe-etc-machine-id\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.423419 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-combined-ca-bundle\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.425286 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-config-data\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.427811 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fs9xd"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.448709 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-scripts\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.448935 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.449557 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-db-sync-config-data\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.461340 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.472531 4853 generic.go:334] "Generic (PLEG): container finished" podID="ff267351-a3b2-46e9-b67a-17f3fe4e75fd" containerID="e8e6492ac843b4485f0e15073489726ac587af0a5827ae93a608ae7ca6cb6350" exitCode=0 Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.472904 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" event={"ID":"ff267351-a3b2-46e9-b67a-17f3fe4e75fd","Type":"ContainerDied","Data":"e8e6492ac843b4485f0e15073489726ac587af0a5827ae93a608ae7ca6cb6350"} Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.484415 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5lvjr" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.509827 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-k5v9z" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.528463 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbr7q\" (UniqueName: \"kubernetes.io/projected/18c4cb93-d59f-4160-9e4d-506184f49afe-kube-api-access-kbr7q\") pod \"cinder-db-sync-tpt7s\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.539108 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.552329 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-combined-ca-bundle\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.552386 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-combined-ca-bundle\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.552424 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.552461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfl48\" (UniqueName: \"kubernetes.io/projected/03b7745a-2365-4bf7-951f-2faa6a046b18-kube-api-access-rfl48\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.552489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-config\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.552537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxfp5\" (UniqueName: \"kubernetes.io/projected/756e7a63-0b67-4fbf-b7e6-62f06f087a42-kube-api-access-lxfp5\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.552611 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-dns-svc\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.554985 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.555118 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b86b595d-63e4-41f1-979f-4a82cc01b136-logs\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.555238 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-config-data\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.555355 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-db-sync-config-data\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.555405 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt9zp\" (UniqueName: \"kubernetes.io/projected/b86b595d-63e4-41f1-979f-4a82cc01b136-kube-api-access-bt9zp\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.555480 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-scripts\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.556231 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.556844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.557228 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-config\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.559052 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-dns-svc\") pod 
\"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.565370 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fs9xd"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.574587 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.585709 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.591461 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxfp5\" (UniqueName: \"kubernetes.io/projected/756e7a63-0b67-4fbf-b7e6-62f06f087a42-kube-api-access-lxfp5\") pod \"dnsmasq-dns-f84976bdf-dlmgn\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.633746 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.646474 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.651809 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.651966 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.685653 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-config-data\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.685763 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-db-sync-config-data\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.685808 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt9zp\" (UniqueName: \"kubernetes.io/projected/b86b595d-63e4-41f1-979f-4a82cc01b136-kube-api-access-bt9zp\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.685855 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-scripts\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.685962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-combined-ca-bundle\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " 
pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.685984 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-combined-ca-bundle\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.686010 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfl48\" (UniqueName: \"kubernetes.io/projected/03b7745a-2365-4bf7-951f-2faa6a046b18-kube-api-access-rfl48\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.686128 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b86b595d-63e4-41f1-979f-4a82cc01b136-logs\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.693430 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b86b595d-63e4-41f1-979f-4a82cc01b136-logs\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.695681 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-config-data\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.699361 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-combined-ca-bundle\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.702863 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-db-sync-config-data\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.703167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-scripts\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.718858 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt9zp\" (UniqueName: \"kubernetes.io/projected/b86b595d-63e4-41f1-979f-4a82cc01b136-kube-api-access-bt9zp\") pod \"placement-db-sync-696ml\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.724683 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfl48\" (UniqueName: 
\"kubernetes.io/projected/03b7745a-2365-4bf7-951f-2faa6a046b18-kube-api-access-rfl48\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.733672 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.756159 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-combined-ca-bundle\") pod \"barbican-db-sync-fs9xd\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.788155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwbj\" (UniqueName: \"kubernetes.io/projected/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-kube-api-access-rgwbj\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.788215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-log-httpd\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.788243 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-config-data\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.788274 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-scripts\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.788293 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-run-httpd\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.788354 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.788402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.882380 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.890380 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwbj\" (UniqueName: \"kubernetes.io/projected/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-kube-api-access-rgwbj\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.890780 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-log-httpd\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.891399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-log-httpd\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.893380 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-696ml" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.897272 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-config-data\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.897790 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-scripts\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.897827 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-run-httpd\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.897934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.898015 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.898806 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-run-httpd\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.904136 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.904766 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.907845 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-config-data\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.929262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.938138 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.940895 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.946056 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-scripts\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.949093 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.949296 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.950826 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwbj\" (UniqueName: \"kubernetes.io/projected/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-kube-api-access-rgwbj\") pod \"ceilometer-0\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.958514 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2xs4c" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.960476 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:20:21 crc kubenswrapper[4853]: I1209 17:20:21.994812 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.026217 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.028123 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.035157 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.069716 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.120974 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121068 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121087 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121104 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqf52\" (UniqueName: \"kubernetes.io/projected/76121d5b-66a5-40ee-a70e-7bca2c6151e4-kube-api-access-mqf52\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121122 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121140 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121153 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-logs\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121205 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121254 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121272 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-config-data\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121314 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxhht\" (UniqueName: \"kubernetes.io/projected/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-kube-api-access-bxhht\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.121357 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224232 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-logs\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224290 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224354 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-config-data\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224448 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxhht\" (UniqueName: \"kubernetes.io/projected/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-kube-api-access-bxhht\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224512 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224627 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224693 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224694 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-logs\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224714 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqf52\" (UniqueName: \"kubernetes.io/projected/76121d5b-66a5-40ee-a70e-7bca2c6151e4-kube-api-access-mqf52\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 
17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224774 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.224821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-scripts\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.231527 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.232728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.232910 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.233507 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.234085 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-logs\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.246023 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.252553 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-mqf52\" (UniqueName: \"kubernetes.io/projected/76121d5b-66a5-40ee-a70e-7bca2c6151e4-kube-api-access-mqf52\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.252778 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-scripts\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.252920 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.253908 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-config-data\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.257103 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.264140 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.279480 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxhht\" (UniqueName: \"kubernetes.io/projected/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-kube-api-access-bxhht\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.297329 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.366426 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.369393 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.429487 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-dns-svc\") pod \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.429565 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js44r\" (UniqueName: \"kubernetes.io/projected/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-kube-api-access-js44r\") pod \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.429590 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-config\") pod \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.429638 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-nb\") pod \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.429722 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-sb\") pod \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\" (UID: \"ff267351-a3b2-46e9-b67a-17f3fe4e75fd\") " Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.432651 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.465785 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-kube-api-access-js44r" (OuterVolumeSpecName: "kube-api-access-js44r") pod "ff267351-a3b2-46e9-b67a-17f3fe4e75fd" (UID: "ff267351-a3b2-46e9-b67a-17f3fe4e75fd"). InnerVolumeSpecName "kube-api-access-js44r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.493976 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff267351-a3b2-46e9-b67a-17f3fe4e75fd" (UID: "ff267351-a3b2-46e9-b67a-17f3fe4e75fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.497116 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff267351-a3b2-46e9-b67a-17f3fe4e75fd" (UID: "ff267351-a3b2-46e9-b67a-17f3fe4e75fd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.502226 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-config" (OuterVolumeSpecName: "config") pod "ff267351-a3b2-46e9-b67a-17f3fe4e75fd" (UID: "ff267351-a3b2-46e9-b67a-17f3fe4e75fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.505488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71","Type":"ContainerStarted","Data":"ca9915365178945ce8cf16d55a9e1105e1c96a21f204ae8473757e24426214ee"} Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.507025 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" event={"ID":"ff267351-a3b2-46e9-b67a-17f3fe4e75fd","Type":"ContainerDied","Data":"337d2aee7511ce6f8a0ef8d76944516484a48d9c3e1397daec4a1958f18c15c7"} Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.507055 4853 scope.go:117] "RemoveContainer" containerID="e8e6492ac843b4485f0e15073489726ac587af0a5827ae93a608ae7ca6cb6350" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.507171 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-hrmsp" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.507981 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff267351-a3b2-46e9-b67a-17f3fe4e75fd" (UID: "ff267351-a3b2-46e9-b67a-17f3fe4e75fd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.516889 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"f27ad317b8885e1d7b9412541255d471bc491b49da84d378b4e4649a8203ec4c"} Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.532833 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.532862 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js44r\" (UniqueName: \"kubernetes.io/projected/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-kube-api-access-js44r\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.532875 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.532886 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.532895 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff267351-a3b2-46e9-b67a-17f3fe4e75fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.577246 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.875445 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hrmsp"] Dec 09 17:20:22 crc kubenswrapper[4853]: I1209 17:20:22.888390 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-hrmsp"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.169754 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lk4j5"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.184167 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-xpz6j"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.232711 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k5v9z"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.477427 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-dlmgn"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.497779 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4pvgk"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.510612 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fs9xd"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.524689 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tpt7s"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.544333 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2f6e868f-f4bc-42d3-bbe6-2a391e2b768d","Type":"ContainerStarted","Data":"b93dbe45a9c430266f28befd28e59e4621825dcaaf07f5e853325dc98cf68fb7"} Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.553130 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-xpz6j" event={"ID":"9a469b8b-0a53-4628-a940-d589b48baae6","Type":"ContainerStarted","Data":"dfdcc2c55ac12c1c370dd80f15f02c77ac0300b27127f38cc1cd8e01d38cc187"} Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.617422 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.185381225 podStartE2EDuration="57.617397258s" podCreationTimestamp="2025-12-09 17:19:26 +0000 UTC" firstStartedPulling="2025-12-09 17:20:06.007773483 +0000 UTC m=+1432.942512675" lastFinishedPulling="2025-12-09 17:20:18.439789526 +0000 UTC m=+1445.374528708" observedRunningTime="2025-12-09 17:20:23.598385706 +0000 UTC m=+1450.533124888" watchObservedRunningTime="2025-12-09 17:20:23.617397258 +0000 UTC m=+1450.552136440" Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.642615 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff267351-a3b2-46e9-b67a-17f3fe4e75fd" path="/var/lib/kubelet/pods/ff267351-a3b2-46e9-b67a-17f3fe4e75fd/volumes" Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.643529 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5v9z" event={"ID":"20655566-5ed0-4732-835a-0bd04a51988f","Type":"ContainerStarted","Data":"0f1bd4614592922de079ab5e3030d2c5d491f6f98ad36f1acbb785fb46c77f95"} Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.643567 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pvgk" 
event={"ID":"b9742aa9-091a-499a-8fa7-49295b5e9488","Type":"ContainerStarted","Data":"0f35cccc99c5101a131beb575c96c65c825ea4ce6e44de83ce78fa1acc129201"} Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.643579 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lk4j5" event={"ID":"dbb4a5d2-dddd-4567-846c-2f81a64049f6","Type":"ContainerStarted","Data":"273ae80f7e9e4ce4d54501fc2be5090d15ab6b79d4b234ffbe822aa1afa37bd0"} Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.643589 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" event={"ID":"756e7a63-0b67-4fbf-b7e6-62f06f087a42","Type":"ContainerStarted","Data":"a2d04c90367cff80983d3530afa8060ae01efc30fa98f5de3ba6b48bb73045a9"} Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.643614 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fs9xd" event={"ID":"03b7745a-2365-4bf7-951f-2faa6a046b18","Type":"ContainerStarted","Data":"b6f9a3e0e27a842c69a389a8ad68adbb0528344fb76591bc19fcb879d09a5742"} Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.696818 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-696ml"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.909544 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.943242 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-dlmgn"] Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.990050 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-99tqh"] Dec 09 17:20:23 crc kubenswrapper[4853]: E1209 17:20:23.999683 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff267351-a3b2-46e9-b67a-17f3fe4e75fd" containerName="init" Dec 09 17:20:23 crc kubenswrapper[4853]: I1209 17:20:23.999716 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff267351-a3b2-46e9-b67a-17f3fe4e75fd" containerName="init" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.001222 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff267351-a3b2-46e9-b67a-17f3fe4e75fd" containerName="init" Dec 09 17:20:24 crc kubenswrapper[4853]: W1209 17:20:24.057849 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb5f280_f2fd_425a_adc9_58ef46c2afa1.slice/crio-2572d101d0427de44327dcb42b8bdeb8889c767774a0e3db20cfbf352219e8a0 WatchSource:0}: Error finding container 2572d101d0427de44327dcb42b8bdeb8889c767774a0e3db20cfbf352219e8a0: Status 404 returned error can't find the container with id 2572d101d0427de44327dcb42b8bdeb8889c767774a0e3db20cfbf352219e8a0 Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.061630 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.061763 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.070370 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.071872 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-99tqh"] Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.091139 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.203650 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.256108 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.256162 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.256190 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/d2bb9bb7-97a8-42c8-b690-5e697af56654-kube-api-access-76fj8\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.256245 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.256302 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.256366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-config\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.359075 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-config\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 
17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.359172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.359208 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.359232 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/d2bb9bb7-97a8-42c8-b690-5e697af56654-kube-api-access-76fj8\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.359276 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.359331 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.360256 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-config\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.360259 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.360366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.360866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.360903 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.418252 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.424769 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/d2bb9bb7-97a8-42c8-b690-5e697af56654-kube-api-access-76fj8\") pod \"dnsmasq-dns-785d8bcb8c-99tqh\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.643268 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tpt7s" event={"ID":"18c4cb93-d59f-4160-9e4d-506184f49afe","Type":"ContainerStarted","Data":"f9951be682dfe5c8174effe7a6476dbb00c98c1ec26525616fdc265358037e03"} Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.650124 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerStarted","Data":"2572d101d0427de44327dcb42b8bdeb8889c767774a0e3db20cfbf352219e8a0"} Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.651978 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-696ml" event={"ID":"b86b595d-63e4-41f1-979f-4a82cc01b136","Type":"ContainerStarted","Data":"00125784a8f8efda05b0e4bd5d6876a02b9b6243c54413a54ff2c66cb9de5fdf"} Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.664258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76121d5b-66a5-40ee-a70e-7bca2c6151e4","Type":"ContainerStarted","Data":"3deb0c7c347004228d7e7bb7df7525c3e7f433aabd622746ab7a8e49bb820310"} Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.691283 4853 generic.go:334] "Generic (PLEG): container finished" podID="9a469b8b-0a53-4628-a940-d589b48baae6" containerID="711b874403c75309801464cd3585bf230e15aa077c97fbb69a5cfa6b4b0e48b7" exitCode=0 Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.692072 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-xpz6j" event={"ID":"9a469b8b-0a53-4628-a940-d589b48baae6","Type":"ContainerDied","Data":"711b874403c75309801464cd3585bf230e15aa077c97fbb69a5cfa6b4b0e48b7"} Dec 09 17:20:24 crc kubenswrapper[4853]: I1209 17:20:24.719225 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.012189 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.401996 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-99tqh"] Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.721433 4853 generic.go:334] "Generic (PLEG): container finished" podID="756e7a63-0b67-4fbf-b7e6-62f06f087a42" containerID="6fb68ac21e07adb4dcc4d1d5f2d14e40c0f2a5ec41af867cf1143f24fc4e0349" exitCode=0 Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.721498 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" event={"ID":"756e7a63-0b67-4fbf-b7e6-62f06f087a42","Type":"ContainerDied","Data":"6fb68ac21e07adb4dcc4d1d5f2d14e40c0f2a5ec41af867cf1143f24fc4e0349"} Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.767268 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617","Type":"ContainerStarted","Data":"912baa2fa3878c9110911723fced2c56733760a205b354beaf97a1153e0dc619"} Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.778537 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" event={"ID":"d2bb9bb7-97a8-42c8-b690-5e697af56654","Type":"ContainerStarted","Data":"1f335cbaa99e2f78a051975468c943e27344796ae4650dc58f4edfbb3c227615"} Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.781070 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-xpz6j" event={"ID":"9a469b8b-0a53-4628-a940-d589b48baae6","Type":"ContainerDied","Data":"dfdcc2c55ac12c1c370dd80f15f02c77ac0300b27127f38cc1cd8e01d38cc187"} Dec 09 17:20:25 crc kubenswrapper[4853]: I1209 17:20:25.781122 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfdcc2c55ac12c1c370dd80f15f02c77ac0300b27127f38cc1cd8e01d38cc187" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.403799 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.423417 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-dns-svc\") pod \"9a469b8b-0a53-4628-a940-d589b48baae6\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.423509 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-sb\") pod \"9a469b8b-0a53-4628-a940-d589b48baae6\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.423535 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-config\") pod \"9a469b8b-0a53-4628-a940-d589b48baae6\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.423637 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-nb\") pod \"9a469b8b-0a53-4628-a940-d589b48baae6\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.423682 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvhxj\" (UniqueName: \"kubernetes.io/projected/9a469b8b-0a53-4628-a940-d589b48baae6-kube-api-access-dvhxj\") pod \"9a469b8b-0a53-4628-a940-d589b48baae6\" (UID: \"9a469b8b-0a53-4628-a940-d589b48baae6\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.494968 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a469b8b-0a53-4628-a940-d589b48baae6-kube-api-access-dvhxj" (OuterVolumeSpecName: "kube-api-access-dvhxj") pod "9a469b8b-0a53-4628-a940-d589b48baae6" (UID: "9a469b8b-0a53-4628-a940-d589b48baae6"). InnerVolumeSpecName "kube-api-access-dvhxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.524939 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a469b8b-0a53-4628-a940-d589b48baae6" (UID: "9a469b8b-0a53-4628-a940-d589b48baae6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.527329 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.527347 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvhxj\" (UniqueName: \"kubernetes.io/projected/9a469b8b-0a53-4628-a940-d589b48baae6-kube-api-access-dvhxj\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.532430 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-config" (OuterVolumeSpecName: "config") pod "9a469b8b-0a53-4628-a940-d589b48baae6" (UID: "9a469b8b-0a53-4628-a940-d589b48baae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.546938 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a469b8b-0a53-4628-a940-d589b48baae6" (UID: "9a469b8b-0a53-4628-a940-d589b48baae6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.558447 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a469b8b-0a53-4628-a940-d589b48baae6" (UID: "9a469b8b-0a53-4628-a940-d589b48baae6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.629612 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.647543 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.647572 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.647581 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a469b8b-0a53-4628-a940-d589b48baae6-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.749146 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxfp5\" (UniqueName: \"kubernetes.io/projected/756e7a63-0b67-4fbf-b7e6-62f06f087a42-kube-api-access-lxfp5\") pod \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.749424 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-config\") pod \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.749450 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-nb\") pod \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.749604 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-sb\") pod \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.749654 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-dns-svc\") pod \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\" (UID: \"756e7a63-0b67-4fbf-b7e6-62f06f087a42\") " Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.783278 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "756e7a63-0b67-4fbf-b7e6-62f06f087a42" (UID: "756e7a63-0b67-4fbf-b7e6-62f06f087a42"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.792992 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756e7a63-0b67-4fbf-b7e6-62f06f087a42-kube-api-access-lxfp5" (OuterVolumeSpecName: "kube-api-access-lxfp5") pod "756e7a63-0b67-4fbf-b7e6-62f06f087a42" (UID: "756e7a63-0b67-4fbf-b7e6-62f06f087a42"). InnerVolumeSpecName "kube-api-access-lxfp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.836729 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-config" (OuterVolumeSpecName: "config") pod "756e7a63-0b67-4fbf-b7e6-62f06f087a42" (UID: "756e7a63-0b67-4fbf-b7e6-62f06f087a42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.837190 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "756e7a63-0b67-4fbf-b7e6-62f06f087a42" (UID: "756e7a63-0b67-4fbf-b7e6-62f06f087a42"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.841992 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" event={"ID":"756e7a63-0b67-4fbf-b7e6-62f06f087a42","Type":"ContainerDied","Data":"a2d04c90367cff80983d3530afa8060ae01efc30fa98f5de3ba6b48bb73045a9"} Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.842047 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-dlmgn" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.842063 4853 scope.go:117] "RemoveContainer" containerID="6fb68ac21e07adb4dcc4d1d5f2d14e40c0f2a5ec41af867cf1143f24fc4e0349" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.846283 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-xpz6j" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.848872 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71","Type":"ContainerStarted","Data":"06004700e5ed539783a08d294490bed6b14e68dbb086b69305d6748eaaccefbe"} Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.854120 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxfp5\" (UniqueName: \"kubernetes.io/projected/756e7a63-0b67-4fbf-b7e6-62f06f087a42-kube-api-access-lxfp5\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.854148 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.854160 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.854168 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:26 crc kubenswrapper[4853]: I1209 17:20:26.907106 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "756e7a63-0b67-4fbf-b7e6-62f06f087a42" (UID: "756e7a63-0b67-4fbf-b7e6-62f06f087a42"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:26.980198 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/756e7a63-0b67-4fbf-b7e6-62f06f087a42-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.043529 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-xpz6j"] Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.058701 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-xpz6j"] Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.224794 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-dlmgn"] Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.241900 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-dlmgn"] Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.585155 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756e7a63-0b67-4fbf-b7e6-62f06f087a42" path="/var/lib/kubelet/pods/756e7a63-0b67-4fbf-b7e6-62f06f087a42/volumes" Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.586134 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a469b8b-0a53-4628-a940-d589b48baae6" path="/var/lib/kubelet/pods/9a469b8b-0a53-4628-a940-d589b48baae6/volumes" Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.879756 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5edba71-6b69-4f76-9dde-ed6c7a7ecb71","Type":"ContainerStarted","Data":"2b17be4624ff34dd566fab3877f43ffdb89c8161227eae5c6da9cb59762ee56a"} Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.887417 4853 generic.go:334] "Generic (PLEG): container finished" podID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerID="fad4c4849f5302e7fb69e01d4bb4dbb5c9e97b47d24a7966c5311073534a158f" exitCode=0 Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.887483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" event={"ID":"d2bb9bb7-97a8-42c8-b690-5e697af56654","Type":"ContainerDied","Data":"fad4c4849f5302e7fb69e01d4bb4dbb5c9e97b47d24a7966c5311073534a158f"} Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.906576 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lk4j5" event={"ID":"dbb4a5d2-dddd-4567-846c-2f81a64049f6","Type":"ContainerStarted","Data":"34d351e1137aebe3693894cce3ec63de6298d7392d97324cd32e5a56305557f3"} Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.913583 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pvgk" event={"ID":"b9742aa9-091a-499a-8fa7-49295b5e9488","Type":"ContainerStarted","Data":"782ea6c9cdd1d1202ff5bd634bd800810fe1aa8d95ac496fb687f6d92285cbe8"} Dec 09 17:20:27 crc kubenswrapper[4853]: I1209 17:20:27.981742 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.981719731 podStartE2EDuration="19.981719731s" podCreationTimestamp="2025-12-09 17:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:27.945817076 +0000 UTC m=+1454.880556268" watchObservedRunningTime="2025-12-09 17:20:27.981719731 +0000 UTC m=+1454.916458923" Dec 
09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.041292 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lk4j5" podStartSLOduration=8.041266107 podStartE2EDuration="8.041266107s" podCreationTimestamp="2025-12-09 17:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:28.015879317 +0000 UTC m=+1454.950618499" watchObservedRunningTime="2025-12-09 17:20:28.041266107 +0000 UTC m=+1454.976005289" Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.063171 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4pvgk" podStartSLOduration=8.06314774 podStartE2EDuration="8.06314774s" podCreationTimestamp="2025-12-09 17:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:28.045784804 +0000 UTC m=+1454.980524006" watchObservedRunningTime="2025-12-09 17:20:28.06314774 +0000 UTC m=+1454.997886922" Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.593643 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.594826 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.594907 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.596335 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6f987a0c43c35d8870f761c6c8a9e4bd42afed53db05f41b90af0f3121049ce"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.596422 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://f6f987a0c43c35d8870f761c6c8a9e4bd42afed53db05f41b90af0f3121049ce" gracePeriod=600 Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.938711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617","Type":"ContainerStarted","Data":"edfdd15f80661202229bbe80c3a9ba0548cfc43ad196668343c4e3727bc1e9e3"} Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.957291 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" event={"ID":"d2bb9bb7-97a8-42c8-b690-5e697af56654","Type":"ContainerStarted","Data":"39b40ecdfa5cbf99a4947871eed47955adca53cabd600ea5f42470db4f4db90d"} Dec 09 17:20:28 crc 
kubenswrapper[4853]: I1209 17:20:28.957731 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.964401 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="f6f987a0c43c35d8870f761c6c8a9e4bd42afed53db05f41b90af0f3121049ce" exitCode=0 Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.964472 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"f6f987a0c43c35d8870f761c6c8a9e4bd42afed53db05f41b90af0f3121049ce"} Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.964511 4853 scope.go:117] "RemoveContainer" containerID="ff410bbb47eb0d8e5f80ec7cc8ea558647698f94ee7441c8421ab12f2216ccf7" Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.968103 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76121d5b-66a5-40ee-a70e-7bca2c6151e4","Type":"ContainerStarted","Data":"3b022869b48ce0100514544d4d80ffaccc6b6cba9ffe9cf7b16251f05faf53af"} Dec 09 17:20:28 crc kubenswrapper[4853]: I1209 17:20:28.984348 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" podStartSLOduration=5.984325699 podStartE2EDuration="5.984325699s" podCreationTimestamp="2025-12-09 17:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:28.983380572 +0000 UTC m=+1455.918119754" watchObservedRunningTime="2025-12-09 17:20:28.984325699 +0000 UTC m=+1455.919064881" Dec 09 17:20:29 crc kubenswrapper[4853]: I1209 17:20:29.070650 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:29 crc kubenswrapper[4853]: I1209 17:20:29.996788 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740"} Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.016982 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76121d5b-66a5-40ee-a70e-7bca2c6151e4","Type":"ContainerStarted","Data":"b2d9a7738ad9337273c9431ab6dda07d916ec1ddebd4442ebe36d72a27f2309b"} Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.017240 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-log" containerID="cri-o://3b022869b48ce0100514544d4d80ffaccc6b6cba9ffe9cf7b16251f05faf53af" gracePeriod=30 Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.017682 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-httpd" containerID="cri-o://b2d9a7738ad9337273c9431ab6dda07d916ec1ddebd4442ebe36d72a27f2309b" gracePeriod=30 Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.037553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617","Type":"ContainerStarted","Data":"7f3870d42c927549d321ac5f68893c3f30a021c1dae2d921b7f5551047473dbd"} Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.045813 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-log" containerID="cri-o://edfdd15f80661202229bbe80c3a9ba0548cfc43ad196668343c4e3727bc1e9e3" gracePeriod=30 Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.046388 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-httpd" containerID="cri-o://7f3870d42c927549d321ac5f68893c3f30a021c1dae2d921b7f5551047473dbd" gracePeriod=30 Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.091055 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.091024298 podStartE2EDuration="10.091024298s" podCreationTimestamp="2025-12-09 17:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:30.062465758 +0000 UTC m=+1456.997204940" watchObservedRunningTime="2025-12-09 17:20:30.091024298 +0000 UTC m=+1457.025763480" Dec 09 17:20:30 crc kubenswrapper[4853]: I1209 17:20:30.111450 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.111429059 podStartE2EDuration="10.111429059s" podCreationTimestamp="2025-12-09 17:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:30.08858007 +0000 UTC m=+1457.023319252" watchObservedRunningTime="2025-12-09 17:20:30.111429059 +0000 UTC m=+1457.046168241" Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.066196 4853 generic.go:334] "Generic (PLEG): container finished" podID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerID="b2d9a7738ad9337273c9431ab6dda07d916ec1ddebd4442ebe36d72a27f2309b" exitCode=0 Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.066792 4853 generic.go:334] "Generic (PLEG): container finished" podID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerID="3b022869b48ce0100514544d4d80ffaccc6b6cba9ffe9cf7b16251f05faf53af" exitCode=143 Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.066302 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76121d5b-66a5-40ee-a70e-7bca2c6151e4","Type":"ContainerDied","Data":"b2d9a7738ad9337273c9431ab6dda07d916ec1ddebd4442ebe36d72a27f2309b"} Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.066994 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76121d5b-66a5-40ee-a70e-7bca2c6151e4","Type":"ContainerDied","Data":"3b022869b48ce0100514544d4d80ffaccc6b6cba9ffe9cf7b16251f05faf53af"} Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.070790 4853 generic.go:334] "Generic (PLEG): container finished" podID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerID="7f3870d42c927549d321ac5f68893c3f30a021c1dae2d921b7f5551047473dbd" exitCode=0 Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.071001 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerID="edfdd15f80661202229bbe80c3a9ba0548cfc43ad196668343c4e3727bc1e9e3" exitCode=143 Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.070970 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617","Type":"ContainerDied","Data":"7f3870d42c927549d321ac5f68893c3f30a021c1dae2d921b7f5551047473dbd"} Dec 09 17:20:31 crc kubenswrapper[4853]: I1209 17:20:31.071932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617","Type":"ContainerDied","Data":"edfdd15f80661202229bbe80c3a9ba0548cfc43ad196668343c4e3727bc1e9e3"} Dec 09 17:20:32 crc kubenswrapper[4853]: I1209 17:20:32.083713 4853 generic.go:334] "Generic (PLEG): container finished" podID="dbb4a5d2-dddd-4567-846c-2f81a64049f6" containerID="34d351e1137aebe3693894cce3ec63de6298d7392d97324cd32e5a56305557f3" exitCode=0 Dec 09 17:20:32 crc kubenswrapper[4853]: I1209 17:20:32.083786 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lk4j5" event={"ID":"dbb4a5d2-dddd-4567-846c-2f81a64049f6","Type":"ContainerDied","Data":"34d351e1137aebe3693894cce3ec63de6298d7392d97324cd32e5a56305557f3"} Dec 09 17:20:34 crc kubenswrapper[4853]: I1209 17:20:34.722817 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:20:34 crc kubenswrapper[4853]: I1209 17:20:34.795316 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5l46m"] Dec 09 17:20:34 crc kubenswrapper[4853]: I1209 17:20:34.795610 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-5l46m" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="dnsmasq-dns" containerID="cri-o://f32a35b086e097d4708e93d68901d5f0e9cdc6931f1cb14d399cd8b5c6cecc2b" gracePeriod=10 Dec 09 17:20:35 crc kubenswrapper[4853]: I1209 17:20:35.147465 4853 generic.go:334] "Generic (PLEG): container finished" podID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerID="f32a35b086e097d4708e93d68901d5f0e9cdc6931f1cb14d399cd8b5c6cecc2b" exitCode=0 Dec 09 17:20:35 crc kubenswrapper[4853]: I1209 17:20:35.147510 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5l46m" event={"ID":"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c","Type":"ContainerDied","Data":"f32a35b086e097d4708e93d68901d5f0e9cdc6931f1cb14d399cd8b5c6cecc2b"} Dec 09 17:20:37 crc kubenswrapper[4853]: I1209 17:20:37.223010 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5l46m" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Dec 09 17:20:39 crc kubenswrapper[4853]: I1209 17:20:39.071338 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:39 crc kubenswrapper[4853]: I1209 17:20:39.077739 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:39 crc kubenswrapper[4853]: I1209 17:20:39.195834 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 17:20:41 crc kubenswrapper[4853]: E1209 17:20:41.599251 4853 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 09 17:20:41 crc kubenswrapper[4853]: E1209 17:20:41.600096 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bt9zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-696ml_openstack(b86b595d-63e4-41f1-979f-4a82cc01b136): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:20:41 crc kubenswrapper[4853]: E1209 17:20:41.601294 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-696ml" podUID="b86b595d-63e4-41f1-979f-4a82cc01b136" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.721664 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.724613 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.808458 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-fernet-keys\") pod \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.808520 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-scripts\") pod \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.808820 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-combined-ca-bundle\") pod \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.808899 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.808999 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-combined-ca-bundle\") pod \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809022 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-config-data\") pod \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809060 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxhht\" (UniqueName: \"kubernetes.io/projected/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-kube-api-access-bxhht\") pod \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809088 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-httpd-run\") pod \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809104 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-scripts\") pod \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809142 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stldv\" (UniqueName: \"kubernetes.io/projected/dbb4a5d2-dddd-4567-846c-2f81a64049f6-kube-api-access-stldv\") pod \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " Dec 09 
17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809218 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-logs\") pod \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809403 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-credential-keys\") pod \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\" (UID: \"dbb4a5d2-dddd-4567-846c-2f81a64049f6\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.809451 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-config-data\") pod \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\" (UID: \"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617\") " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.812249 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-logs" (OuterVolumeSpecName: "logs") pod "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" (UID: "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.812490 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" (UID: "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.816089 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.816120 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.818050 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-scripts" (OuterVolumeSpecName: "scripts") pod "dbb4a5d2-dddd-4567-846c-2f81a64049f6" (UID: "dbb4a5d2-dddd-4567-846c-2f81a64049f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.818784 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" (UID: "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.825679 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dbb4a5d2-dddd-4567-846c-2f81a64049f6" (UID: "dbb4a5d2-dddd-4567-846c-2f81a64049f6"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.828019 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb4a5d2-dddd-4567-846c-2f81a64049f6-kube-api-access-stldv" (OuterVolumeSpecName: "kube-api-access-stldv") pod "dbb4a5d2-dddd-4567-846c-2f81a64049f6" (UID: "dbb4a5d2-dddd-4567-846c-2f81a64049f6"). InnerVolumeSpecName "kube-api-access-stldv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.830803 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-kube-api-access-bxhht" (OuterVolumeSpecName: "kube-api-access-bxhht") pod "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" (UID: "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617"). InnerVolumeSpecName "kube-api-access-bxhht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.839862 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dbb4a5d2-dddd-4567-846c-2f81a64049f6" (UID: "dbb4a5d2-dddd-4567-846c-2f81a64049f6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.839908 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-scripts" (OuterVolumeSpecName: "scripts") pod "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" (UID: "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.871730 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" (UID: "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.879321 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-config-data" (OuterVolumeSpecName: "config-data") pod "dbb4a5d2-dddd-4567-846c-2f81a64049f6" (UID: "dbb4a5d2-dddd-4567-846c-2f81a64049f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.890409 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-config-data" (OuterVolumeSpecName: "config-data") pod "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" (UID: "6eae483a-ec55-4a1d-9e21-9d2d0e4b0617"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918252 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918497 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918506 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918527 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918536 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918545 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxhht\" (UniqueName: \"kubernetes.io/projected/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-kube-api-access-bxhht\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918553 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918561 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stldv\" (UniqueName: \"kubernetes.io/projected/dbb4a5d2-dddd-4567-846c-2f81a64049f6-kube-api-access-stldv\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918568 4853 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.918576 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.928502 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbb4a5d2-dddd-4567-846c-2f81a64049f6" (UID: "dbb4a5d2-dddd-4567-846c-2f81a64049f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:41 crc kubenswrapper[4853]: I1209 17:20:41.956275 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.022611 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.022655 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb4a5d2-dddd-4567-846c-2f81a64049f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.139194 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.139590 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjtb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k5v9z_openstack(20655566-5ed0-4732-835a-0bd04a51988f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.142484 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-k5v9z" podUID="20655566-5ed0-4732-835a-0bd04a51988f" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.144649 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.218722 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76121d5b-66a5-40ee-a70e-7bca2c6151e4","Type":"ContainerDied","Data":"3deb0c7c347004228d7e7bb7df7525c3e7f433aabd622746ab7a8e49bb820310"} Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.218767 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.218773 4853 scope.go:117] "RemoveContainer" containerID="b2d9a7738ad9337273c9431ab6dda07d916ec1ddebd4442ebe36d72a27f2309b" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.220817 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lk4j5" event={"ID":"dbb4a5d2-dddd-4567-846c-2f81a64049f6","Type":"ContainerDied","Data":"273ae80f7e9e4ce4d54501fc2be5090d15ab6b79d4b234ffbe822aa1afa37bd0"} Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.220843 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273ae80f7e9e4ce4d54501fc2be5090d15ab6b79d4b234ffbe822aa1afa37bd0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.220854 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lk4j5" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.225103 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eae483a-ec55-4a1d-9e21-9d2d0e4b0617","Type":"ContainerDied","Data":"912baa2fa3878c9110911723fced2c56733760a205b354beaf97a1153e0dc619"} Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.225237 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.226051 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-httpd-run\") pod \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.226099 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqf52\" (UniqueName: \"kubernetes.io/projected/76121d5b-66a5-40ee-a70e-7bca2c6151e4-kube-api-access-mqf52\") pod \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.226121 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.226568 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-scripts\") pod \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.226709 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-combined-ca-bundle\") pod \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.226756 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-config-data\") pod \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.226972 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-logs\") pod \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\" (UID: \"76121d5b-66a5-40ee-a70e-7bca2c6151e4\") " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.227458 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "76121d5b-66a5-40ee-a70e-7bca2c6151e4" (UID: "76121d5b-66a5-40ee-a70e-7bca2c6151e4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.227724 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-696ml" podUID="b86b595d-63e4-41f1-979f-4a82cc01b136" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.227751 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-k5v9z" podUID="20655566-5ed0-4732-835a-0bd04a51988f" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.228268 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-logs" (OuterVolumeSpecName: "logs") pod "76121d5b-66a5-40ee-a70e-7bca2c6151e4" (UID: "76121d5b-66a5-40ee-a70e-7bca2c6151e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.230292 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.230818 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76121d5b-66a5-40ee-a70e-7bca2c6151e4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.232240 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-scripts" (OuterVolumeSpecName: "scripts") pod "76121d5b-66a5-40ee-a70e-7bca2c6151e4" (UID: "76121d5b-66a5-40ee-a70e-7bca2c6151e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.233869 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "76121d5b-66a5-40ee-a70e-7bca2c6151e4" (UID: "76121d5b-66a5-40ee-a70e-7bca2c6151e4"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.234213 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76121d5b-66a5-40ee-a70e-7bca2c6151e4-kube-api-access-mqf52" (OuterVolumeSpecName: "kube-api-access-mqf52") pod "76121d5b-66a5-40ee-a70e-7bca2c6151e4" (UID: "76121d5b-66a5-40ee-a70e-7bca2c6151e4"). InnerVolumeSpecName "kube-api-access-mqf52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.287218 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76121d5b-66a5-40ee-a70e-7bca2c6151e4" (UID: "76121d5b-66a5-40ee-a70e-7bca2c6151e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.312734 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-config-data" (OuterVolumeSpecName: "config-data") pod "76121d5b-66a5-40ee-a70e-7bca2c6151e4" (UID: "76121d5b-66a5-40ee-a70e-7bca2c6151e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.318940 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.330910 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.332236 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqf52\" (UniqueName: \"kubernetes.io/projected/76121d5b-66a5-40ee-a70e-7bca2c6151e4-kube-api-access-mqf52\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.332276 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.332288 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.332298 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.332309 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76121d5b-66a5-40ee-a70e-7bca2c6151e4-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.365886 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.366099 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.366670 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a469b8b-0a53-4628-a940-d589b48baae6" containerName="init" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.366694 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a469b8b-0a53-4628-a940-d589b48baae6" containerName="init" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.366708 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756e7a63-0b67-4fbf-b7e6-62f06f087a42" containerName="init" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.366715 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="756e7a63-0b67-4fbf-b7e6-62f06f087a42" containerName="init" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.366739 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-httpd" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.366748 4853 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-httpd" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.366784 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-httpd" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.366793 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-httpd" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.366810 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-log" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.366818 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-log" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.367073 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb4a5d2-dddd-4567-846c-2f81a64049f6" containerName="keystone-bootstrap" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367091 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb4a5d2-dddd-4567-846c-2f81a64049f6" containerName="keystone-bootstrap" Dec 09 17:20:42 crc kubenswrapper[4853]: E1209 17:20:42.367117 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-log" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367124 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-log" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367643 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-log" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367668 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-httpd" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367691 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" containerName="glance-log" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367710 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="756e7a63-0b67-4fbf-b7e6-62f06f087a42" containerName="init" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367725 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb4a5d2-dddd-4567-846c-2f81a64049f6" containerName="keystone-bootstrap" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367741 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" containerName="glance-httpd" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.367751 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a469b8b-0a53-4628-a940-d589b48baae6" containerName="init" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.369276 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.376342 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.376387 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.379284 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.433588 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-scripts\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.433645 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.433679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-logs\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.433740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.433833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbwr\" (UniqueName: \"kubernetes.io/projected/9add26f6-37de-4ffc-9426-fab101257314-kube-api-access-2hbwr\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.433888 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-config-data\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.433978 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.434053 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.434152 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.535844 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.538769 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hbwr\" (UniqueName: \"kubernetes.io/projected/9add26f6-37de-4ffc-9426-fab101257314-kube-api-access-2hbwr\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.538892 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-config-data\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.538941 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.539072 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.539159 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-scripts\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.539186 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.539221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-logs\") pod \"glance-default-external-api-0\" (UID: 
\"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.539462 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.539821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-logs\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.540229 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.542129 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.546524 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-scripts\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.549462 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.550522 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-config-data\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.563175 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbwr\" (UniqueName: \"kubernetes.io/projected/9add26f6-37de-4ffc-9426-fab101257314-kube-api-access-2hbwr\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.570056 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.582235 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.591614 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.601668 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.603541 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.605920 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.605991 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.620087 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.640744 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4wh\" (UniqueName: \"kubernetes.io/projected/26ebb774-866f-478e-8ef2-7cfbd141887b-kube-api-access-6t4wh\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.640804 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.640894 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.641015 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.641098 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.641141 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " 
pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.641280 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.641359 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.702098 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.743734 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.743803 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.743844 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.743930 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.744139 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.744893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.744998 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t4wh\" (UniqueName: 
\"kubernetes.io/projected/26ebb774-866f-478e-8ef2-7cfbd141887b-kube-api-access-6t4wh\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.745035 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.745106 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.745505 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.745855 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-logs\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.747728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.748825 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.749819 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.753865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.762138 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t4wh\" (UniqueName: \"kubernetes.io/projected/26ebb774-866f-478e-8ef2-7cfbd141887b-kube-api-access-6t4wh\") pod 
\"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.806856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.877889 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lk4j5"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.890434 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lk4j5"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.975248 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r6pjt"] Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.976968 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.979455 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrrnz" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.979851 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.980290 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.980414 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.980480 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 17:20:42 crc kubenswrapper[4853]: I1209 17:20:42.988168 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6pjt"] Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.034144 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.052962 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-combined-ca-bundle\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.053186 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-credential-keys\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.053255 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-config-data\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.053344 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-fernet-keys\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.053383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-scripts\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.054037 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt8kl\" (UniqueName: \"kubernetes.io/projected/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-kube-api-access-nt8kl\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.155830 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-credential-keys\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.155881 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-config-data\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.155940 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-fernet-keys\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " 
pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.155969 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-scripts\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.156015 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt8kl\" (UniqueName: \"kubernetes.io/projected/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-kube-api-access-nt8kl\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.156070 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-combined-ca-bundle\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.160361 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-combined-ca-bundle\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.160498 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-config-data\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.161042 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-scripts\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.161222 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-fernet-keys\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.161571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-credential-keys\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.178709 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt8kl\" (UniqueName: \"kubernetes.io/projected/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-kube-api-access-nt8kl\") pod \"keystone-bootstrap-r6pjt\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.309784 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.585172 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eae483a-ec55-4a1d-9e21-9d2d0e4b0617" path="/var/lib/kubelet/pods/6eae483a-ec55-4a1d-9e21-9d2d0e4b0617/volumes" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.586623 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76121d5b-66a5-40ee-a70e-7bca2c6151e4" path="/var/lib/kubelet/pods/76121d5b-66a5-40ee-a70e-7bca2c6151e4/volumes" Dec 09 17:20:43 crc kubenswrapper[4853]: I1209 17:20:43.588690 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb4a5d2-dddd-4567-846c-2f81a64049f6" path="/var/lib/kubelet/pods/dbb4a5d2-dddd-4567-846c-2f81a64049f6/volumes" Dec 09 17:20:46 crc kubenswrapper[4853]: I1209 17:20:46.271892 4853 generic.go:334] "Generic (PLEG): container finished" podID="b9742aa9-091a-499a-8fa7-49295b5e9488" containerID="782ea6c9cdd1d1202ff5bd634bd800810fe1aa8d95ac496fb687f6d92285cbe8" exitCode=0 Dec 09 17:20:46 crc kubenswrapper[4853]: I1209 17:20:46.271959 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pvgk" event={"ID":"b9742aa9-091a-499a-8fa7-49295b5e9488","Type":"ContainerDied","Data":"782ea6c9cdd1d1202ff5bd634bd800810fe1aa8d95ac496fb687f6d92285cbe8"} Dec 09 17:20:47 crc kubenswrapper[4853]: I1209 17:20:47.223889 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5l46m" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Dec 09 17:20:50 crc kubenswrapper[4853]: E1209 17:20:50.652259 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 09 17:20:50 crc kubenswrapper[4853]: E1209 17:20:50.652775 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n669hb7h5c4h677h79h85h656h68fh64dhd6h9fh586h5b4h7ch5b4hddh67dh67ch75h85h87h698hddh545h576h6h65bh57h676hb4h657h9bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9cb5f280-f2fd-425a-adc9-58ef46c2afa1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.745013 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.756364 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.919677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ktxj\" (UniqueName: \"kubernetes.io/projected/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-kube-api-access-5ktxj\") pod \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.919767 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-config\") pod \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.919800 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-sb\") pod \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.919823 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-config\") pod \"b9742aa9-091a-499a-8fa7-49295b5e9488\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.919853 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-dns-svc\") pod \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.919891 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-nb\") pod \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\" (UID: \"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.919985 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-combined-ca-bundle\") pod \"b9742aa9-091a-499a-8fa7-49295b5e9488\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.920004 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwh9p\" (UniqueName: \"kubernetes.io/projected/b9742aa9-091a-499a-8fa7-49295b5e9488-kube-api-access-nwh9p\") pod \"b9742aa9-091a-499a-8fa7-49295b5e9488\" (UID: \"b9742aa9-091a-499a-8fa7-49295b5e9488\") " Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.927261 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9742aa9-091a-499a-8fa7-49295b5e9488-kube-api-access-nwh9p" (OuterVolumeSpecName: "kube-api-access-nwh9p") pod "b9742aa9-091a-499a-8fa7-49295b5e9488" (UID: "b9742aa9-091a-499a-8fa7-49295b5e9488"). InnerVolumeSpecName "kube-api-access-nwh9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.928998 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-kube-api-access-5ktxj" (OuterVolumeSpecName: "kube-api-access-5ktxj") pod "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" (UID: "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c"). InnerVolumeSpecName "kube-api-access-5ktxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.957385 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-config" (OuterVolumeSpecName: "config") pod "b9742aa9-091a-499a-8fa7-49295b5e9488" (UID: "b9742aa9-091a-499a-8fa7-49295b5e9488"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.973501 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9742aa9-091a-499a-8fa7-49295b5e9488" (UID: "b9742aa9-091a-499a-8fa7-49295b5e9488"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:20:50 crc kubenswrapper[4853]: I1209 17:20:50.984210 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" (UID: "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.002279 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" (UID: "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.004219 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-config" (OuterVolumeSpecName: "config") pod "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" (UID: "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.013506 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" (UID: "ca2a9b0d-f643-4e8a-8076-b76e5a8e703c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022241 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022382 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwh9p\" (UniqueName: \"kubernetes.io/projected/b9742aa9-091a-499a-8fa7-49295b5e9488-kube-api-access-nwh9p\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022473 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ktxj\" (UniqueName: \"kubernetes.io/projected/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-kube-api-access-5ktxj\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022528 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022615 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022698 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9742aa9-091a-499a-8fa7-49295b5e9488-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022766 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.022828 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.328018 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5l46m" event={"ID":"ca2a9b0d-f643-4e8a-8076-b76e5a8e703c","Type":"ContainerDied","Data":"d816e16e384c43fa8dd8495e0d8f3ae4a1ffc89f0c038f607c2a6d0c290a9e30"} Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.328044 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5l46m" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.329400 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pvgk" event={"ID":"b9742aa9-091a-499a-8fa7-49295b5e9488","Type":"ContainerDied","Data":"0f35cccc99c5101a131beb575c96c65c825ea4ce6e44de83ce78fa1acc129201"} Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.329426 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f35cccc99c5101a131beb575c96c65c825ea4ce6e44de83ce78fa1acc129201" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.329455 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4pvgk" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.388922 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5l46m"] Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.398241 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5l46m"] Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.580925 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" path="/var/lib/kubelet/pods/ca2a9b0d-f643-4e8a-8076-b76e5a8e703c/volumes" Dec 09 17:20:51 crc kubenswrapper[4853]: E1209 17:20:51.860403 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 09 17:20:51 crc kubenswrapper[4853]: E1209 17:20:51.860824 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbr7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tpt7s_openstack(18c4cb93-d59f-4160-9e4d-506184f49afe): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Dec 09 17:20:51 crc kubenswrapper[4853]: E1209 17:20:51.862198 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tpt7s" podUID="18c4cb93-d59f-4160-9e4d-506184f49afe" Dec 09 17:20:51 crc kubenswrapper[4853]: I1209 17:20:51.901743 4853 scope.go:117] "RemoveContainer" containerID="3b022869b48ce0100514544d4d80ffaccc6b6cba9ffe9cf7b16251f05faf53af" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.072484 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2k95k"] Dec 09 17:20:52 crc kubenswrapper[4853]: E1209 17:20:52.072937 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9742aa9-091a-499a-8fa7-49295b5e9488" containerName="neutron-db-sync" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.072949 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9742aa9-091a-499a-8fa7-49295b5e9488" containerName="neutron-db-sync" Dec 09 17:20:52 crc kubenswrapper[4853]: E1209 17:20:52.072967 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="dnsmasq-dns" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.072974 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="dnsmasq-dns" Dec 09 17:20:52 crc kubenswrapper[4853]: E1209 17:20:52.072991 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="init" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.072998 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="init" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.073237 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9742aa9-091a-499a-8fa7-49295b5e9488" containerName="neutron-db-sync" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.073260 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="dnsmasq-dns" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.074380 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.104965 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2k95k"] Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.109785 4853 scope.go:117] "RemoveContainer" containerID="7f3870d42c927549d321ac5f68893c3f30a021c1dae2d921b7f5551047473dbd" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.121410 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6684749cc6-45h7r"] Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.127525 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.130388 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-lrpt2" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.130678 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.130817 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.132320 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.136353 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6684749cc6-45h7r"] Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.210400 4853 scope.go:117] "RemoveContainer" containerID="edfdd15f80661202229bbe80c3a9ba0548cfc43ad196668343c4e3727bc1e9e3" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.224649 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-5l46m" podUID="ca2a9b0d-f643-4e8a-8076-b76e5a8e703c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.258956 4853 scope.go:117] "RemoveContainer" containerID="f32a35b086e097d4708e93d68901d5f0e9cdc6931f1cb14d399cd8b5c6cecc2b" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262433 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26mrf\" (UniqueName: \"kubernetes.io/projected/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-kube-api-access-26mrf\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262464 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262498 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-ovndb-tls-certs\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262632 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klcrz\" (UniqueName: \"kubernetes.io/projected/c09d1c23-e621-474b-ac1f-554a69baff26-kube-api-access-klcrz\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262665 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-httpd-config\") pod \"neutron-6684749cc6-45h7r\" (UID: 
\"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-config\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-config\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262737 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262797 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262825 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-combined-ca-bundle\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.262954 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.287625 4853 scope.go:117] "RemoveContainer" containerID="5905aecb83fc657c85c76099cef84761ecf6d4d4eb423b1d76db1eda66bacb4f" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.350553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fs9xd" event={"ID":"03b7745a-2365-4bf7-951f-2faa6a046b18","Type":"ContainerStarted","Data":"3a77dacf3400d5080df7efe79b77a51be064e428b0c5f8ae5548519158df0a4d"} Dec 09 17:20:52 crc kubenswrapper[4853]: E1209 17:20:52.359067 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tpt7s" podUID="18c4cb93-d59f-4160-9e4d-506184f49afe" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.370924 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klcrz\" (UniqueName: 
\"kubernetes.io/projected/c09d1c23-e621-474b-ac1f-554a69baff26-kube-api-access-klcrz\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.370983 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-httpd-config\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371010 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-config\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371028 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-config\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371069 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371091 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-combined-ca-bundle\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371161 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371211 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26mrf\" (UniqueName: \"kubernetes.io/projected/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-kube-api-access-26mrf\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-swift-storage-0\") pod 
\"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.371244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-ovndb-tls-certs\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.372895 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.374189 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-config\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.376068 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.377683 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.379136 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fs9xd" podStartSLOduration=3.036466916 podStartE2EDuration="31.379125956s" podCreationTimestamp="2025-12-09 17:20:21 +0000 UTC" firstStartedPulling="2025-12-09 17:20:23.518717057 +0000 UTC m=+1450.453456239" lastFinishedPulling="2025-12-09 17:20:51.861376097 +0000 UTC m=+1478.796115279" observedRunningTime="2025-12-09 17:20:52.371932014 +0000 UTC m=+1479.306671196" watchObservedRunningTime="2025-12-09 17:20:52.379125956 +0000 UTC m=+1479.313865138" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.379750 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-svc\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.380632 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-combined-ca-bundle\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.380912 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-httpd-config\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.381253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-config\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.400665 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klcrz\" (UniqueName: \"kubernetes.io/projected/c09d1c23-e621-474b-ac1f-554a69baff26-kube-api-access-klcrz\") pod \"dnsmasq-dns-55f844cf75-2k95k\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.407986 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-ovndb-tls-certs\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.411661 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26mrf\" (UniqueName: \"kubernetes.io/projected/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-kube-api-access-26mrf\") pod \"neutron-6684749cc6-45h7r\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.412163 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.515698 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.576410 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6pjt"] Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.711640 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:20:52 crc kubenswrapper[4853]: I1209 17:20:52.808195 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.004431 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2k95k"] Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.288079 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6684749cc6-45h7r"] Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.386026 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9add26f6-37de-4ffc-9426-fab101257314","Type":"ContainerStarted","Data":"d16dfa4a11d62844cfe141697d707749a496ed2ae9ee33b5c71f2e92bf56a3dd"} Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.406729 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6pjt" event={"ID":"3a31d060-259e-4f8d-bb05-3d6cc9b198d1","Type":"ContainerStarted","Data":"61387370b3b8eb20b7bcedcd8bd1d63a30e0efc3992aea872599fe312754db22"} Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.406782 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6pjt" event={"ID":"3a31d060-259e-4f8d-bb05-3d6cc9b198d1","Type":"ContainerStarted","Data":"ce9169f814f41af83d204f94c22600167e3bf80f1a93f2986e137b0dbee886f7"} Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.413075 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" event={"ID":"c09d1c23-e621-474b-ac1f-554a69baff26","Type":"ContainerStarted","Data":"c5f8bc12f791f3e5c520f102fb48488c12852aeff004f18ad47afd9d1db7637c"} Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.430723 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebb774-866f-478e-8ef2-7cfbd141887b","Type":"ContainerStarted","Data":"b77fcd64f4f7b5d3c333020469582f3a1f2d30ff0af7b4af4921823f7ec50006"} Dec 09 17:20:53 crc kubenswrapper[4853]: I1209 17:20:53.433454 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r6pjt" podStartSLOduration=11.433434 podStartE2EDuration="11.433434s" podCreationTimestamp="2025-12-09 17:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:53.431025532 +0000 UTC m=+1480.365764704" watchObservedRunningTime="2025-12-09 17:20:53.433434 +0000 UTC m=+1480.368173182" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.463665 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-645985f88c-fqxds"] Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.468714 4853 generic.go:334] "Generic (PLEG): container finished" podID="c09d1c23-e621-474b-ac1f-554a69baff26" containerID="864fbb795ee8c67fee70ea9e1f4d8f4c25de89e3935eabae7e7e69dbb994f7eb" exitCode=0 Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.473989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"9add26f6-37de-4ffc-9426-fab101257314","Type":"ContainerStarted","Data":"2e3919b681bf24a210f482fd4673a0d4a5676c928d9e0a4b044236f9d7e38902"} Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.474146 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" event={"ID":"c09d1c23-e621-474b-ac1f-554a69baff26","Type":"ContainerDied","Data":"864fbb795ee8c67fee70ea9e1f4d8f4c25de89e3935eabae7e7e69dbb994f7eb"} Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.475030 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.477520 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.478067 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.480981 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645985f88c-fqxds"] Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.487288 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebb774-866f-478e-8ef2-7cfbd141887b","Type":"ContainerStarted","Data":"e3c040a04b35d4f21410f73ef47c654dc2bc735c0618450d4640188fdcbaf697"} Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.582235 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-public-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.582658 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-internal-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.582981 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-combined-ca-bundle\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.583175 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-ovndb-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.583361 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-httpd-config\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 
17:20:54.583584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-config\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.583845 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7pf7\" (UniqueName: \"kubernetes.io/projected/481f80f1-29b5-4bdc-af3a-8c6cea94774f-kube-api-access-m7pf7\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.685967 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-combined-ca-bundle\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.686047 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-ovndb-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.686091 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-httpd-config\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.686152 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-config\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.686210 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7pf7\" (UniqueName: \"kubernetes.io/projected/481f80f1-29b5-4bdc-af3a-8c6cea94774f-kube-api-access-m7pf7\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.686339 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-public-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.686398 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-internal-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.697797 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-ovndb-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.698320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-config\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.699348 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-internal-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.700213 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-httpd-config\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.702926 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-public-tls-certs\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.706676 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481f80f1-29b5-4bdc-af3a-8c6cea94774f-combined-ca-bundle\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.710408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7pf7\" (UniqueName: \"kubernetes.io/projected/481f80f1-29b5-4bdc-af3a-8c6cea94774f-kube-api-access-m7pf7\") pod \"neutron-645985f88c-fqxds\" (UID: \"481f80f1-29b5-4bdc-af3a-8c6cea94774f\") " pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:54 crc kubenswrapper[4853]: I1209 17:20:54.797423 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:20:58 crc kubenswrapper[4853]: W1209 17:20:58.095130 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a04a5a0_6547_4374_bc53_7b0ae86adf2b.slice/crio-47388fcf25901e059d38c535c00de2ec98e4a2d1698c56b6337ba28293c79fad WatchSource:0}: Error finding container 47388fcf25901e059d38c535c00de2ec98e4a2d1698c56b6337ba28293c79fad: Status 404 returned error can't find the container with id 47388fcf25901e059d38c535c00de2ec98e4a2d1698c56b6337ba28293c79fad Dec 09 17:20:58 crc kubenswrapper[4853]: I1209 17:20:58.544010 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684749cc6-45h7r" event={"ID":"3a04a5a0-6547-4374-bc53-7b0ae86adf2b","Type":"ContainerStarted","Data":"47388fcf25901e059d38c535c00de2ec98e4a2d1698c56b6337ba28293c79fad"} Dec 09 17:20:58 crc kubenswrapper[4853]: I1209 17:20:58.774814 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645985f88c-fqxds"] Dec 09 17:20:58 crc kubenswrapper[4853]: W1209 17:20:58.790214 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod481f80f1_29b5_4bdc_af3a_8c6cea94774f.slice/crio-b4ab3ef4d6f4de1889cdfe9eba15ec09107067465c68375f1af9debe55475e85 WatchSource:0}: Error finding container b4ab3ef4d6f4de1889cdfe9eba15ec09107067465c68375f1af9debe55475e85: Status 404 returned error can't find the container with id b4ab3ef4d6f4de1889cdfe9eba15ec09107067465c68375f1af9debe55475e85 Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.558724 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" event={"ID":"c09d1c23-e621-474b-ac1f-554a69baff26","Type":"ContainerStarted","Data":"d05a593015fd7316887edcc3614be3bc520fe2b79eee8c72b1e143203475162f"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.559374 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.562333 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerStarted","Data":"d997511fef1a2c72ccc4c40afea69980def8c109dd15e0ba31967fbbc8c3804c"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.565163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-696ml" event={"ID":"b86b595d-63e4-41f1-979f-4a82cc01b136","Type":"ContainerStarted","Data":"f05f45d4d7c286f77bc1705355643879b0b1895d5c34a856b4a721b2d94b857e"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.579258 4853 generic.go:334] "Generic (PLEG): container finished" podID="3a31d060-259e-4f8d-bb05-3d6cc9b198d1" containerID="61387370b3b8eb20b7bcedcd8bd1d63a30e0efc3992aea872599fe312754db22" exitCode=0 Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.582861 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.582894 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5v9z" event={"ID":"20655566-5ed0-4732-835a-0bd04a51988f","Type":"ContainerStarted","Data":"061f55522b9662cff917876d7f9a1914135d687e2afc6fd1d9cf19ef694d8da9"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.582917 4853 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-6684749cc6-45h7r" event={"ID":"3a04a5a0-6547-4374-bc53-7b0ae86adf2b","Type":"ContainerStarted","Data":"2cdbe0b9fbd8f63f92598fac2e6d85452e540640ffdb246be4ba33eeefa48181"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.582931 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684749cc6-45h7r" event={"ID":"3a04a5a0-6547-4374-bc53-7b0ae86adf2b","Type":"ContainerStarted","Data":"3f32fc6c70afbf4e6f684b901c7682c1de73579e865c7ed2a6ff74901c708bf2"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.582945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645985f88c-fqxds" event={"ID":"481f80f1-29b5-4bdc-af3a-8c6cea94774f","Type":"ContainerStarted","Data":"b4ab3ef4d6f4de1889cdfe9eba15ec09107067465c68375f1af9debe55475e85"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.582957 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6pjt" event={"ID":"3a31d060-259e-4f8d-bb05-3d6cc9b198d1","Type":"ContainerDied","Data":"61387370b3b8eb20b7bcedcd8bd1d63a30e0efc3992aea872599fe312754db22"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.590274 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" podStartSLOduration=8.590250855 podStartE2EDuration="8.590250855s" podCreationTimestamp="2025-12-09 17:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:59.583370472 +0000 UTC m=+1486.518109654" watchObservedRunningTime="2025-12-09 17:20:59.590250855 +0000 UTC m=+1486.524990037" Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.606746 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6684749cc6-45h7r" podStartSLOduration=7.606721916 podStartE2EDuration="7.606721916s" podCreationTimestamp="2025-12-09 17:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:20:59.600633296 +0000 UTC m=+1486.535372498" watchObservedRunningTime="2025-12-09 17:20:59.606721916 +0000 UTC m=+1486.541461108" Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.658151 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-696ml" podStartSLOduration=5.073470156 podStartE2EDuration="39.658131734s" podCreationTimestamp="2025-12-09 17:20:20 +0000 UTC" firstStartedPulling="2025-12-09 17:20:23.701211534 +0000 UTC m=+1450.635950716" lastFinishedPulling="2025-12-09 17:20:58.285873112 +0000 UTC m=+1485.220612294" observedRunningTime="2025-12-09 17:20:59.654408971 +0000 UTC m=+1486.589148143" watchObservedRunningTime="2025-12-09 17:20:59.658131734 +0000 UTC m=+1486.592870936" Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:20:59.671984 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-k5v9z" podStartSLOduration=4.753780519 podStartE2EDuration="39.671965451s" podCreationTimestamp="2025-12-09 17:20:20 +0000 UTC" firstStartedPulling="2025-12-09 17:20:23.306280142 +0000 UTC m=+1450.241019324" lastFinishedPulling="2025-12-09 17:20:58.224465074 +0000 UTC m=+1485.159204256" observedRunningTime="2025-12-09 17:20:59.669270516 +0000 UTC m=+1486.604009698" watchObservedRunningTime="2025-12-09 17:20:59.671965451 +0000 UTC m=+1486.606704633" Dec 09 17:21:00 crc 
kubenswrapper[4853]: I1209 17:21:00.592418 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebb774-866f-478e-8ef2-7cfbd141887b","Type":"ContainerStarted","Data":"e6b74223f78c0e0010bc7c951980521974c85cbbe6a804780cd558381f49be16"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:21:00.595661 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9add26f6-37de-4ffc-9426-fab101257314","Type":"ContainerStarted","Data":"eb142153196e9c94ef8c428d32e9f7e12e3306e4cb66edb674ccc432893c8c3d"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:21:00.602128 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645985f88c-fqxds" event={"ID":"481f80f1-29b5-4bdc-af3a-8c6cea94774f","Type":"ContainerStarted","Data":"619be036489b561a39e0cbe05459779db5a7387bc16df9cc0c6571f5995efe20"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:21:00.602189 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645985f88c-fqxds" event={"ID":"481f80f1-29b5-4bdc-af3a-8c6cea94774f","Type":"ContainerStarted","Data":"ccad13bfb145a8a2accf352b4856891005db124afb2df52387b7b004e6fc4a95"} Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:21:00.658152 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-645985f88c-fqxds" podStartSLOduration=6.658131719 podStartE2EDuration="6.658131719s" podCreationTimestamp="2025-12-09 17:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:00.649020544 +0000 UTC m=+1487.583759726" watchObservedRunningTime="2025-12-09 17:21:00.658131719 +0000 UTC m=+1487.592870901" Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:21:00.662244 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.662229193 podStartE2EDuration="18.662229193s" podCreationTimestamp="2025-12-09 17:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:00.620008712 +0000 UTC m=+1487.554747894" watchObservedRunningTime="2025-12-09 17:21:00.662229193 +0000 UTC m=+1487.596968375" Dec 09 17:21:00 crc kubenswrapper[4853]: I1209 17:21:00.686870 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.686839632 podStartE2EDuration="18.686839632s" podCreationTimestamp="2025-12-09 17:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:00.672394508 +0000 UTC m=+1487.607133690" watchObservedRunningTime="2025-12-09 17:21:00.686839632 +0000 UTC m=+1487.621578814" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.142116 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.263923 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt8kl\" (UniqueName: \"kubernetes.io/projected/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-kube-api-access-nt8kl\") pod \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.263971 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-config-data\") pod \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.264113 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-scripts\") pod \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.264238 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-combined-ca-bundle\") pod \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.264269 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-credential-keys\") pod \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.264285 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-fernet-keys\") pod \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\" (UID: \"3a31d060-259e-4f8d-bb05-3d6cc9b198d1\") " Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.273668 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3a31d060-259e-4f8d-bb05-3d6cc9b198d1" (UID: "3a31d060-259e-4f8d-bb05-3d6cc9b198d1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.274473 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-kube-api-access-nt8kl" (OuterVolumeSpecName: "kube-api-access-nt8kl") pod "3a31d060-259e-4f8d-bb05-3d6cc9b198d1" (UID: "3a31d060-259e-4f8d-bb05-3d6cc9b198d1"). InnerVolumeSpecName "kube-api-access-nt8kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.293295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-scripts" (OuterVolumeSpecName: "scripts") pod "3a31d060-259e-4f8d-bb05-3d6cc9b198d1" (UID: "3a31d060-259e-4f8d-bb05-3d6cc9b198d1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.294744 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3a31d060-259e-4f8d-bb05-3d6cc9b198d1" (UID: "3a31d060-259e-4f8d-bb05-3d6cc9b198d1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.311355 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-config-data" (OuterVolumeSpecName: "config-data") pod "3a31d060-259e-4f8d-bb05-3d6cc9b198d1" (UID: "3a31d060-259e-4f8d-bb05-3d6cc9b198d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.331873 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a31d060-259e-4f8d-bb05-3d6cc9b198d1" (UID: "3a31d060-259e-4f8d-bb05-3d6cc9b198d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.366956 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.366998 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.367014 4853 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.367023 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.367033 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt8kl\" (UniqueName: \"kubernetes.io/projected/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-kube-api-access-nt8kl\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.367044 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a31d060-259e-4f8d-bb05-3d6cc9b198d1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.616919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6pjt" event={"ID":"3a31d060-259e-4f8d-bb05-3d6cc9b198d1","Type":"ContainerDied","Data":"ce9169f814f41af83d204f94c22600167e3bf80f1a93f2986e137b0dbee886f7"} Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.616967 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce9169f814f41af83d204f94c22600167e3bf80f1a93f2986e137b0dbee886f7" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.616987 4853 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r6pjt" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.619750 4853 generic.go:334] "Generic (PLEG): container finished" podID="03b7745a-2365-4bf7-951f-2faa6a046b18" containerID="3a77dacf3400d5080df7efe79b77a51be064e428b0c5f8ae5548519158df0a4d" exitCode=0 Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.619883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fs9xd" event={"ID":"03b7745a-2365-4bf7-951f-2faa6a046b18","Type":"ContainerDied","Data":"3a77dacf3400d5080df7efe79b77a51be064e428b0c5f8ae5548519158df0a4d"} Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.621707 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.736556 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-65d74d4db4-2bcdr"] Dec 09 17:21:01 crc kubenswrapper[4853]: E1209 17:21:01.737061 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a31d060-259e-4f8d-bb05-3d6cc9b198d1" containerName="keystone-bootstrap" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.737079 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a31d060-259e-4f8d-bb05-3d6cc9b198d1" containerName="keystone-bootstrap" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.737631 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a31d060-259e-4f8d-bb05-3d6cc9b198d1" containerName="keystone-bootstrap" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.738823 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.743638 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.743860 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrrnz" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.743973 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.744164 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.744304 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.744413 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.772039 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65d74d4db4-2bcdr"] Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.877969 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-combined-ca-bundle\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.878450 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-fernet-keys\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.878503 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-public-tls-certs\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.878563 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-internal-tls-certs\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.878683 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-config-data\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.878723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nl7n\" (UniqueName: \"kubernetes.io/projected/e89854ae-ff97-4850-992f-14c38c2e1848-kube-api-access-6nl7n\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.880164 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-credential-keys\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.880216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-scripts\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982165 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-config-data\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982218 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nl7n\" (UniqueName: \"kubernetes.io/projected/e89854ae-ff97-4850-992f-14c38c2e1848-kube-api-access-6nl7n\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982251 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-credential-keys\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-scripts\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982328 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-combined-ca-bundle\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982360 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-fernet-keys\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-public-tls-certs\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.982436 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-internal-tls-certs\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.988339 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-combined-ca-bundle\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.990560 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-internal-tls-certs\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.994837 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-credential-keys\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.994844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-fernet-keys\") pod \"keystone-65d74d4db4-2bcdr\" (UID: 
\"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.995560 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-public-tls-certs\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:01 crc kubenswrapper[4853]: I1209 17:21:01.997886 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-scripts\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.001384 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89854ae-ff97-4850-992f-14c38c2e1848-config-data\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.007635 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nl7n\" (UniqueName: \"kubernetes.io/projected/e89854ae-ff97-4850-992f-14c38c2e1848-kube-api-access-6nl7n\") pod \"keystone-65d74d4db4-2bcdr\" (UID: \"e89854ae-ff97-4850-992f-14c38c2e1848\") " pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.193743 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.632369 4853 generic.go:334] "Generic (PLEG): container finished" podID="b86b595d-63e4-41f1-979f-4a82cc01b136" containerID="f05f45d4d7c286f77bc1705355643879b0b1895d5c34a856b4a721b2d94b857e" exitCode=0 Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.633748 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-696ml" event={"ID":"b86b595d-63e4-41f1-979f-4a82cc01b136","Type":"ContainerDied","Data":"f05f45d4d7c286f77bc1705355643879b0b1895d5c34a856b4a721b2d94b857e"} Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.702491 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.702554 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.822145 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 17:21:02 crc kubenswrapper[4853]: I1209 17:21:02.833804 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.035034 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.035279 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.094992 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.117765 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.644808 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.644855 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.644871 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 17:21:03 crc kubenswrapper[4853]: I1209 17:21:03.644886 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.609068 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.613205 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-696ml" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.679437 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-combined-ca-bundle\") pod \"03b7745a-2365-4bf7-951f-2faa6a046b18\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.679853 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfl48\" (UniqueName: \"kubernetes.io/projected/03b7745a-2365-4bf7-951f-2faa6a046b18-kube-api-access-rfl48\") pod \"03b7745a-2365-4bf7-951f-2faa6a046b18\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.681701 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fs9xd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.682014 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fs9xd" event={"ID":"03b7745a-2365-4bf7-951f-2faa6a046b18","Type":"ContainerDied","Data":"b6f9a3e0e27a842c69a389a8ad68adbb0528344fb76591bc19fcb879d09a5742"} Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.682058 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f9a3e0e27a842c69a389a8ad68adbb0528344fb76591bc19fcb879d09a5742" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.701023 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-696ml" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.701323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-696ml" event={"ID":"b86b595d-63e4-41f1-979f-4a82cc01b136","Type":"ContainerDied","Data":"00125784a8f8efda05b0e4bd5d6876a02b9b6243c54413a54ff2c66cb9de5fdf"} Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.701362 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00125784a8f8efda05b0e4bd5d6876a02b9b6243c54413a54ff2c66cb9de5fdf" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.701674 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b7745a-2365-4bf7-951f-2faa6a046b18-kube-api-access-rfl48" (OuterVolumeSpecName: "kube-api-access-rfl48") pod "03b7745a-2365-4bf7-951f-2faa6a046b18" (UID: "03b7745a-2365-4bf7-951f-2faa6a046b18"). InnerVolumeSpecName "kube-api-access-rfl48". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.779740 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5496596656-4sjvd"] Dec 09 17:21:04 crc kubenswrapper[4853]: E1209 17:21:04.780386 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7745a-2365-4bf7-951f-2faa6a046b18" containerName="barbican-db-sync" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.780413 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7745a-2365-4bf7-951f-2faa6a046b18" containerName="barbican-db-sync" Dec 09 17:21:04 crc kubenswrapper[4853]: E1209 17:21:04.780476 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86b595d-63e4-41f1-979f-4a82cc01b136" containerName="placement-db-sync" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.780486 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86b595d-63e4-41f1-979f-4a82cc01b136" containerName="placement-db-sync" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.781067 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b7745a-2365-4bf7-951f-2faa6a046b18" containerName="barbican-db-sync" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.781162 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86b595d-63e4-41f1-979f-4a82cc01b136" containerName="placement-db-sync" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.782684 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.782983 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-config-data\") pod \"b86b595d-63e4-41f1-979f-4a82cc01b136\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.783123 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-scripts\") pod \"b86b595d-63e4-41f1-979f-4a82cc01b136\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.783425 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt9zp\" (UniqueName: \"kubernetes.io/projected/b86b595d-63e4-41f1-979f-4a82cc01b136-kube-api-access-bt9zp\") pod \"b86b595d-63e4-41f1-979f-4a82cc01b136\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.783537 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b86b595d-63e4-41f1-979f-4a82cc01b136-logs\") pod \"b86b595d-63e4-41f1-979f-4a82cc01b136\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.783732 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-db-sync-config-data\") pod \"03b7745a-2365-4bf7-951f-2faa6a046b18\" (UID: \"03b7745a-2365-4bf7-951f-2faa6a046b18\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.783809 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-combined-ca-bundle\") pod \"b86b595d-63e4-41f1-979f-4a82cc01b136\" (UID: \"b86b595d-63e4-41f1-979f-4a82cc01b136\") " Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.785288 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfl48\" (UniqueName: \"kubernetes.io/projected/03b7745a-2365-4bf7-951f-2faa6a046b18-kube-api-access-rfl48\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.785447 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.785731 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.786810 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b86b595d-63e4-41f1-979f-4a82cc01b136-logs" (OuterVolumeSpecName: "logs") pod "b86b595d-63e4-41f1-979f-4a82cc01b136" (UID: "b86b595d-63e4-41f1-979f-4a82cc01b136"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.791385 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03b7745a-2365-4bf7-951f-2faa6a046b18" (UID: "03b7745a-2365-4bf7-951f-2faa6a046b18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.812446 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5496596656-4sjvd"] Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.816608 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-scripts" (OuterVolumeSpecName: "scripts") pod "b86b595d-63e4-41f1-979f-4a82cc01b136" (UID: "b86b595d-63e4-41f1-979f-4a82cc01b136"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.816698 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "03b7745a-2365-4bf7-951f-2faa6a046b18" (UID: "03b7745a-2365-4bf7-951f-2faa6a046b18"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.816740 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86b595d-63e4-41f1-979f-4a82cc01b136-kube-api-access-bt9zp" (OuterVolumeSpecName: "kube-api-access-bt9zp") pod "b86b595d-63e4-41f1-979f-4a82cc01b136" (UID: "b86b595d-63e4-41f1-979f-4a82cc01b136"). InnerVolumeSpecName "kube-api-access-bt9zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.842761 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b86b595d-63e4-41f1-979f-4a82cc01b136" (UID: "b86b595d-63e4-41f1-979f-4a82cc01b136"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.859742 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-config-data" (OuterVolumeSpecName: "config-data") pod "b86b595d-63e4-41f1-979f-4a82cc01b136" (UID: "b86b595d-63e4-41f1-979f-4a82cc01b136"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.889133 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-combined-ca-bundle\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.889280 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-config-data\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.889543 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-internal-tls-certs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.889614 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-scripts\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.889683 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6k4d\" (UniqueName: \"kubernetes.io/projected/9571bd10-147c-4016-af2c-0dc4df16ae63-kube-api-access-h6k4d\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.889730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-public-tls-certs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.889883 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9571bd10-147c-4016-af2c-0dc4df16ae63-logs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.890026 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b86b595d-63e4-41f1-979f-4a82cc01b136-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.890050 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.890064 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.890076 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7745a-2365-4bf7-951f-2faa6a046b18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.890087 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.890098 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b595d-63e4-41f1-979f-4a82cc01b136-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.890111 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt9zp\" (UniqueName: \"kubernetes.io/projected/b86b595d-63e4-41f1-979f-4a82cc01b136-kube-api-access-bt9zp\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.991755 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-config-data\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.991877 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-internal-tls-certs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.991904 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-scripts\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.991927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6k4d\" (UniqueName: \"kubernetes.io/projected/9571bd10-147c-4016-af2c-0dc4df16ae63-kube-api-access-h6k4d\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.991949 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-public-tls-certs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.992003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9571bd10-147c-4016-af2c-0dc4df16ae63-logs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.992068 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-combined-ca-bundle\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.993015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9571bd10-147c-4016-af2c-0dc4df16ae63-logs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.995332 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-internal-tls-certs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.995700 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-combined-ca-bundle\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:04 crc kubenswrapper[4853]: I1209 17:21:04.995968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-scripts\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.001521 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-public-tls-certs\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.005038 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9571bd10-147c-4016-af2c-0dc4df16ae63-config-data\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.013375 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6k4d\" (UniqueName: \"kubernetes.io/projected/9571bd10-147c-4016-af2c-0dc4df16ae63-kube-api-access-h6k4d\") pod \"placement-5496596656-4sjvd\" (UID: \"9571bd10-147c-4016-af2c-0dc4df16ae63\") " pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.227335 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.751444 4853 generic.go:334] "Generic (PLEG): container finished" podID="20655566-5ed0-4732-835a-0bd04a51988f" containerID="061f55522b9662cff917876d7f9a1914135d687e2afc6fd1d9cf19ef694d8da9" exitCode=0 Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.752106 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5v9z" event={"ID":"20655566-5ed0-4732-835a-0bd04a51988f","Type":"ContainerDied","Data":"061f55522b9662cff917876d7f9a1914135d687e2afc6fd1d9cf19ef694d8da9"} Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.879570 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-57459d985f-7pt4c"] Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.881525 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.888975 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.895731 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5lvjr" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.900532 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.923741 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth"] Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.929937 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.934719 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 09 17:21:05 crc kubenswrapper[4853]: I1209 17:21:05.959719 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57459d985f-7pt4c"] Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.005685 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2k95k"] Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.006008 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" containerName="dnsmasq-dns" containerID="cri-o://d05a593015fd7316887edcc3614be3bc520fe2b79eee8c72b1e143203475162f" gracePeriod=10 Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.010035 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.030668 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth"] Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042035 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/281434f1-0f91-404d-8f13-2bbf97b18237-logs\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-config-data-custom\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-combined-ca-bundle\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-config-data-custom\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042270 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042350 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-config-data\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042387 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-config-data\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042425 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41a353c-83cf-4482-9984-5197c7709ced-logs\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042455 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb9rs\" (UniqueName: \"kubernetes.io/projected/d41a353c-83cf-4482-9984-5197c7709ced-kube-api-access-lb9rs\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.042490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnj8t\" (UniqueName: \"kubernetes.io/projected/281434f1-0f91-404d-8f13-2bbf97b18237-kube-api-access-jnj8t\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.094768 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4twt"] Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.096702 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.117157 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4twt"] Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147019 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147168 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-config-data\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147225 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-config-data\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147271 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41a353c-83cf-4482-9984-5197c7709ced-logs\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb9rs\" (UniqueName: \"kubernetes.io/projected/d41a353c-83cf-4482-9984-5197c7709ced-kube-api-access-lb9rs\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147356 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnj8t\" (UniqueName: \"kubernetes.io/projected/281434f1-0f91-404d-8f13-2bbf97b18237-kube-api-access-jnj8t\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/281434f1-0f91-404d-8f13-2bbf97b18237-logs\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147551 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-config-data-custom\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147618 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-combined-ca-bundle\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.147646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-config-data-custom\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.156046 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/281434f1-0f91-404d-8f13-2bbf97b18237-logs\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.156346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-config-data-custom\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.156847 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-combined-ca-bundle\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.157112 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d41a353c-83cf-4482-9984-5197c7709ced-logs\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.161763 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-config-data-custom\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.166566 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41a353c-83cf-4482-9984-5197c7709ced-config-data\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.167519 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb9rs\" (UniqueName: \"kubernetes.io/projected/d41a353c-83cf-4482-9984-5197c7709ced-kube-api-access-lb9rs\") pod \"barbican-keystone-listener-7b4ddcfc8d-7ttth\" (UID: \"d41a353c-83cf-4482-9984-5197c7709ced\") " 
pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.175406 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-combined-ca-bundle\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.178238 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnj8t\" (UniqueName: \"kubernetes.io/projected/281434f1-0f91-404d-8f13-2bbf97b18237-kube-api-access-jnj8t\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.185262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/281434f1-0f91-404d-8f13-2bbf97b18237-config-data\") pod \"barbican-worker-57459d985f-7pt4c\" (UID: \"281434f1-0f91-404d-8f13-2bbf97b18237\") " pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.238785 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d67cbc658-l5567"] Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.239307 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-57459d985f-7pt4c" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.242299 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.247884 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.250577 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djglj\" (UniqueName: \"kubernetes.io/projected/2255858c-cd09-4ef5-b023-195188f6f4d8-kube-api-access-djglj\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.250769 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.250858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.251015 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " 
pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.251120 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.251176 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-config\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.263295 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.284714 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d67cbc658-l5567"] Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.353173 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.353333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.353389 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpx55\" (UniqueName: \"kubernetes.io/projected/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-kube-api-access-dpx55\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.353414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-logs\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.353518 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.353986 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-config\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 
17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.354029 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djglj\" (UniqueName: \"kubernetes.io/projected/2255858c-cd09-4ef5-b023-195188f6f4d8-kube-api-access-djglj\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.354102 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.354135 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data-custom\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.354201 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.354256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-combined-ca-bundle\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.354691 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.355174 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-config\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.355541 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.357086 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: 
I1209 17:21:06.358504 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.378546 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djglj\" (UniqueName: \"kubernetes.io/projected/2255858c-cd09-4ef5-b023-195188f6f4d8-kube-api-access-djglj\") pod \"dnsmasq-dns-85ff748b95-h4twt\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.444031 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.457002 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.457150 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpx55\" (UniqueName: \"kubernetes.io/projected/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-kube-api-access-dpx55\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.457184 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-logs\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.457293 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data-custom\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.457405 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-combined-ca-bundle\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.457707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-logs\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.462090 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-combined-ca-bundle\") pod \"barbican-api-d67cbc658-l5567\" (UID: 
\"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.463321 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.465832 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data-custom\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.478628 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpx55\" (UniqueName: \"kubernetes.io/projected/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-kube-api-access-dpx55\") pod \"barbican-api-d67cbc658-l5567\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") " pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.618880 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.770286 4853 generic.go:334] "Generic (PLEG): container finished" podID="c09d1c23-e621-474b-ac1f-554a69baff26" containerID="d05a593015fd7316887edcc3614be3bc520fe2b79eee8c72b1e143203475162f" exitCode=0 Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.770440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" event={"ID":"c09d1c23-e621-474b-ac1f-554a69baff26","Type":"ContainerDied","Data":"d05a593015fd7316887edcc3614be3bc520fe2b79eee8c72b1e143203475162f"} Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.797050 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.811555 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 17:21:06 crc kubenswrapper[4853]: I1209 17:21:06.826014 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 17:21:07 crc kubenswrapper[4853]: I1209 17:21:07.412852 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: connect: connection refused" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.240852 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-k5v9z" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.306435 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjtb4\" (UniqueName: \"kubernetes.io/projected/20655566-5ed0-4732-835a-0bd04a51988f-kube-api-access-qjtb4\") pod \"20655566-5ed0-4732-835a-0bd04a51988f\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.306589 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-config-data\") pod \"20655566-5ed0-4732-835a-0bd04a51988f\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.306768 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-combined-ca-bundle\") pod \"20655566-5ed0-4732-835a-0bd04a51988f\" (UID: \"20655566-5ed0-4732-835a-0bd04a51988f\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.336922 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20655566-5ed0-4732-835a-0bd04a51988f-kube-api-access-qjtb4" (OuterVolumeSpecName: "kube-api-access-qjtb4") pod "20655566-5ed0-4732-835a-0bd04a51988f" (UID: "20655566-5ed0-4732-835a-0bd04a51988f"). InnerVolumeSpecName "kube-api-access-qjtb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.362691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20655566-5ed0-4732-835a-0bd04a51988f" (UID: "20655566-5ed0-4732-835a-0bd04a51988f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.410794 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.410827 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjtb4\" (UniqueName: \"kubernetes.io/projected/20655566-5ed0-4732-835a-0bd04a51988f-kube-api-access-qjtb4\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.460045 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-config-data" (OuterVolumeSpecName: "config-data") pod "20655566-5ed0-4732-835a-0bd04a51988f" (UID: "20655566-5ed0-4732-835a-0bd04a51988f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.514143 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20655566-5ed0-4732-835a-0bd04a51988f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.795497 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k5v9z" event={"ID":"20655566-5ed0-4732-835a-0bd04a51988f","Type":"ContainerDied","Data":"0f1bd4614592922de079ab5e3030d2c5d491f6f98ad36f1acbb785fb46c77f95"} Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.795807 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f1bd4614592922de079ab5e3030d2c5d491f6f98ad36f1acbb785fb46c77f95" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.795550 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k5v9z" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.854448 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.922737 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klcrz\" (UniqueName: \"kubernetes.io/projected/c09d1c23-e621-474b-ac1f-554a69baff26-kube-api-access-klcrz\") pod \"c09d1c23-e621-474b-ac1f-554a69baff26\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.922870 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-swift-storage-0\") pod \"c09d1c23-e621-474b-ac1f-554a69baff26\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.922893 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-nb\") pod \"c09d1c23-e621-474b-ac1f-554a69baff26\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.922929 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-svc\") pod \"c09d1c23-e621-474b-ac1f-554a69baff26\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.923008 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-sb\") pod \"c09d1c23-e621-474b-ac1f-554a69baff26\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.923072 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-config\") pod \"c09d1c23-e621-474b-ac1f-554a69baff26\" (UID: \"c09d1c23-e621-474b-ac1f-554a69baff26\") " Dec 09 17:21:08 crc kubenswrapper[4853]: I1209 17:21:08.935362 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09d1c23-e621-474b-ac1f-554a69baff26-kube-api-access-klcrz" 
(OuterVolumeSpecName: "kube-api-access-klcrz") pod "c09d1c23-e621-474b-ac1f-554a69baff26" (UID: "c09d1c23-e621-474b-ac1f-554a69baff26"). InnerVolumeSpecName "kube-api-access-klcrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.043343 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klcrz\" (UniqueName: \"kubernetes.io/projected/c09d1c23-e621-474b-ac1f-554a69baff26-kube-api-access-klcrz\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.082133 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c09d1c23-e621-474b-ac1f-554a69baff26" (UID: "c09d1c23-e621-474b-ac1f-554a69baff26"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.127786 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c09d1c23-e621-474b-ac1f-554a69baff26" (UID: "c09d1c23-e621-474b-ac1f-554a69baff26"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.129263 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-config" (OuterVolumeSpecName: "config") pod "c09d1c23-e621-474b-ac1f-554a69baff26" (UID: "c09d1c23-e621-474b-ac1f-554a69baff26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.141246 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c09d1c23-e621-474b-ac1f-554a69baff26" (UID: "c09d1c23-e621-474b-ac1f-554a69baff26"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.147567 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.147619 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.147630 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.147645 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.184929 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c09d1c23-e621-474b-ac1f-554a69baff26" (UID: "c09d1c23-e621-474b-ac1f-554a69baff26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.220717 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-767f9884cb-lxgw6"] Dec 09 17:21:09 crc kubenswrapper[4853]: E1209 17:21:09.221369 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" containerName="init" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.221387 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" containerName="init" Dec 09 17:21:09 crc kubenswrapper[4853]: E1209 17:21:09.221403 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" containerName="dnsmasq-dns" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.221411 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" containerName="dnsmasq-dns" Dec 09 17:21:09 crc kubenswrapper[4853]: E1209 17:21:09.221425 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20655566-5ed0-4732-835a-0bd04a51988f" containerName="heat-db-sync" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.221434 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="20655566-5ed0-4732-835a-0bd04a51988f" containerName="heat-db-sync" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.221811 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="20655566-5ed0-4732-835a-0bd04a51988f" containerName="heat-db-sync" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.221829 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" containerName="dnsmasq-dns" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.224568 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.228852 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.229274 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.234745 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-767f9884cb-lxgw6"] Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.254573 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09d1c23-e621-474b-ac1f-554a69baff26-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.394903 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-config-data-custom\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.394970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-public-tls-certs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.395081 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-internal-tls-certs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.395113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bb4256-cdbf-4359-a670-8cfc13b8af47-logs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.395146 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbz5p\" (UniqueName: \"kubernetes.io/projected/08bb4256-cdbf-4359-a670-8cfc13b8af47-kube-api-access-nbz5p\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.395290 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-combined-ca-bundle\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.395391 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-config-data\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.499802 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-config-data-custom\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.499852 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-public-tls-certs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.499892 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-internal-tls-certs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.499914 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bb4256-cdbf-4359-a670-8cfc13b8af47-logs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.499934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbz5p\" (UniqueName: \"kubernetes.io/projected/08bb4256-cdbf-4359-a670-8cfc13b8af47-kube-api-access-nbz5p\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.499993 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-combined-ca-bundle\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.500035 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-config-data\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.509579 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08bb4256-cdbf-4359-a670-8cfc13b8af47-logs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.531868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-config-data\") pod 
\"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.549162 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-internal-tls-certs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.549790 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-config-data-custom\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.556281 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbz5p\" (UniqueName: \"kubernetes.io/projected/08bb4256-cdbf-4359-a670-8cfc13b8af47-kube-api-access-nbz5p\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.556935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-combined-ca-bundle\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.562379 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08bb4256-cdbf-4359-a670-8cfc13b8af47-public-tls-certs\") pod \"barbican-api-767f9884cb-lxgw6\" (UID: \"08bb4256-cdbf-4359-a670-8cfc13b8af47\") " pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.702134 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.804035 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5496596656-4sjvd"] Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.936484 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" event={"ID":"c09d1c23-e621-474b-ac1f-554a69baff26","Type":"ContainerDied","Data":"c5f8bc12f791f3e5c520f102fb48488c12852aeff004f18ad47afd9d1db7637c"} Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.936756 4853 scope.go:117] "RemoveContainer" containerID="d05a593015fd7316887edcc3614be3bc520fe2b79eee8c72b1e143203475162f" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.936898 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-2k95k" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.985795 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2k95k"] Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.988920 4853 scope.go:117] "RemoveContainer" containerID="864fbb795ee8c67fee70ea9e1f4d8f4c25de89e3935eabae7e7e69dbb994f7eb" Dec 09 17:21:09 crc kubenswrapper[4853]: I1209 17:21:09.996311 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-2k95k"] Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.123904 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.326302 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth"] Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.346880 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65d74d4db4-2bcdr"] Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.384246 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d67cbc658-l5567"] Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.416425 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4twt"] Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.467755 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57459d985f-7pt4c"] Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.597102 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-767f9884cb-lxgw6"] Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.949055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5496596656-4sjvd" event={"ID":"9571bd10-147c-4016-af2c-0dc4df16ae63","Type":"ContainerStarted","Data":"7455aed1357b4716e75783ff2df9bd56c2deecf0296c5e25dfd79ecfe17d6dc1"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.949360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5496596656-4sjvd" event={"ID":"9571bd10-147c-4016-af2c-0dc4df16ae63","Type":"ContainerStarted","Data":"577aee6c3145140a9e0c842f1efa603b82fda6edbf18c12fd7426ce65ba29758"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.949374 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5496596656-4sjvd" event={"ID":"9571bd10-147c-4016-af2c-0dc4df16ae63","Type":"ContainerStarted","Data":"369190006ce447051ec3a29785184536a4448b0ee11398529a55ba77276d7e5e"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.949490 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.949505 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.952377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65d74d4db4-2bcdr" event={"ID":"e89854ae-ff97-4850-992f-14c38c2e1848","Type":"ContainerStarted","Data":"00a6020d6a9564db22cea226ba9fd9620d282b2cc514be9e01ac4e4a2d87af9f"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.952421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65d74d4db4-2bcdr" 
event={"ID":"e89854ae-ff97-4850-992f-14c38c2e1848","Type":"ContainerStarted","Data":"3e0268d4b9a118ac9458f1b96e0f9ffbd067a5bb64821fbb51abb430447a1478"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.952511 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.954201 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67cbc658-l5567" event={"ID":"08b1f5b0-2c5b-4216-9ff7-3206ca910e68","Type":"ContainerStarted","Data":"6c2852c4c514547a84e0feb7be541614703115c92d9c083c482658dd689dde38"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.954252 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67cbc658-l5567" event={"ID":"08b1f5b0-2c5b-4216-9ff7-3206ca910e68","Type":"ContainerStarted","Data":"a377f2a871a2d6313d027870eaf1abd08530f3114dd2c9b0a5de892859246c68"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.957060 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" event={"ID":"d41a353c-83cf-4482-9984-5197c7709ced","Type":"ContainerStarted","Data":"664b6832e0c4d1604da7ba75fa51cd40601860b9cce1c08e08e7b80c1f31f077"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.959548 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerStarted","Data":"6602855f94674d4315f02fc01b237ef16da1b2cc3afd081ba4478a3e7deab365"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.961476 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57459d985f-7pt4c" event={"ID":"281434f1-0f91-404d-8f13-2bbf97b18237","Type":"ContainerStarted","Data":"9be3c304d478e2005198732a1a6ee5205553a35fe9399970b8c987af5f2dd41d"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.963175 4853 generic.go:334] "Generic (PLEG): container finished" podID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerID="c60d52c616c24569dee6075819a353f582c886eda1ceeb1a5ba7d177aa0b7f7a" exitCode=0 Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.963231 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" event={"ID":"2255858c-cd09-4ef5-b023-195188f6f4d8","Type":"ContainerDied","Data":"c60d52c616c24569dee6075819a353f582c886eda1ceeb1a5ba7d177aa0b7f7a"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.963295 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" event={"ID":"2255858c-cd09-4ef5-b023-195188f6f4d8","Type":"ContainerStarted","Data":"844242f749e9ce2d3dccaf8fd2ae8677d131e65e16059ba5cf7466dfb2f9a0c7"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.965861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-767f9884cb-lxgw6" event={"ID":"08bb4256-cdbf-4359-a670-8cfc13b8af47","Type":"ContainerStarted","Data":"7797b0331d5b9e7cfd127a05c47d1b7abb9e38113655899bdc788f20c72da0a3"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.965895 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-767f9884cb-lxgw6" event={"ID":"08bb4256-cdbf-4359-a670-8cfc13b8af47","Type":"ContainerStarted","Data":"f4971d40cf8849f57da23fcf01ca0a92e093cbd88f81fc08e0207ae41d846007"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.972334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-tpt7s" event={"ID":"18c4cb93-d59f-4160-9e4d-506184f49afe","Type":"ContainerStarted","Data":"4fd5170f06d48368f9a3a457b88be65264b2380f3cd23b303926251e091dcbef"} Dec 09 17:21:10 crc kubenswrapper[4853]: I1209 17:21:10.995466 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5496596656-4sjvd" podStartSLOduration=6.995447192 podStartE2EDuration="6.995447192s" podCreationTimestamp="2025-12-09 17:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:10.97571941 +0000 UTC m=+1497.910458592" watchObservedRunningTime="2025-12-09 17:21:10.995447192 +0000 UTC m=+1497.930186374" Dec 09 17:21:11 crc kubenswrapper[4853]: I1209 17:21:11.029892 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tpt7s" podStartSLOduration=5.474481497 podStartE2EDuration="51.029870935s" podCreationTimestamp="2025-12-09 17:20:20 +0000 UTC" firstStartedPulling="2025-12-09 17:20:23.522582235 +0000 UTC m=+1450.457321407" lastFinishedPulling="2025-12-09 17:21:09.077971653 +0000 UTC m=+1496.012710845" observedRunningTime="2025-12-09 17:21:11.000182814 +0000 UTC m=+1497.934922006" watchObservedRunningTime="2025-12-09 17:21:11.029870935 +0000 UTC m=+1497.964610107" Dec 09 17:21:11 crc kubenswrapper[4853]: I1209 17:21:11.052120 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-65d74d4db4-2bcdr" podStartSLOduration=10.052099917 podStartE2EDuration="10.052099917s" podCreationTimestamp="2025-12-09 17:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:11.023753804 +0000 UTC m=+1497.958492986" watchObservedRunningTime="2025-12-09 17:21:11.052099917 +0000 UTC m=+1497.986839099" Dec 09 17:21:11 crc kubenswrapper[4853]: I1209 17:21:11.580865 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09d1c23-e621-474b-ac1f-554a69baff26" path="/var/lib/kubelet/pods/c09d1c23-e621-474b-ac1f-554a69baff26/volumes" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.018160 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" event={"ID":"2255858c-cd09-4ef5-b023-195188f6f4d8","Type":"ContainerStarted","Data":"5ab0579c4adbb3944a1ca01469aa60778f178a9f87d46308a68c3e74df4d29c4"} Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.018758 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.020129 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-767f9884cb-lxgw6" event={"ID":"08bb4256-cdbf-4359-a670-8cfc13b8af47","Type":"ContainerStarted","Data":"e5390d4898be11fc77c37806f25c94e27439e8e89d47231485e9d0815222a956"} Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.020627 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.021582 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.025002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67cbc658-l5567" 
event={"ID":"08b1f5b0-2c5b-4216-9ff7-3206ca910e68","Type":"ContainerStarted","Data":"fa6da9c62b193ef3e9c1b4ff9e84e21a5ac99f147b9619cc8fd7ff565afd075d"} Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.025715 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.025807 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.039150 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" podStartSLOduration=8.039129103 podStartE2EDuration="8.039129103s" podCreationTimestamp="2025-12-09 17:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:13.038613538 +0000 UTC m=+1499.973352720" watchObservedRunningTime="2025-12-09 17:21:13.039129103 +0000 UTC m=+1499.973868285" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.079962 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d67cbc658-l5567" podStartSLOduration=7.079943125 podStartE2EDuration="7.079943125s" podCreationTimestamp="2025-12-09 17:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:13.07654342 +0000 UTC m=+1500.011282602" watchObservedRunningTime="2025-12-09 17:21:13.079943125 +0000 UTC m=+1500.014682297" Dec 09 17:21:13 crc kubenswrapper[4853]: I1209 17:21:13.624793 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-767f9884cb-lxgw6" podStartSLOduration=4.624772882 podStartE2EDuration="4.624772882s" podCreationTimestamp="2025-12-09 17:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:13.098516035 +0000 UTC m=+1500.033255217" watchObservedRunningTime="2025-12-09 17:21:13.624772882 +0000 UTC m=+1500.559512064" Dec 09 17:21:15 crc kubenswrapper[4853]: I1209 17:21:15.062196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" event={"ID":"d41a353c-83cf-4482-9984-5197c7709ced","Type":"ContainerStarted","Data":"163350ad708a2c08bbbe524a841a79a68d046bcf0fd65c7a5f7e482c10e61cc1"} Dec 09 17:21:15 crc kubenswrapper[4853]: I1209 17:21:15.062730 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" event={"ID":"d41a353c-83cf-4482-9984-5197c7709ced","Type":"ContainerStarted","Data":"e826c7bdfd96d7df40d6e5604a7003bac3384a97cf9b13cb0c99cfc5cf2aa628"} Dec 09 17:21:15 crc kubenswrapper[4853]: I1209 17:21:15.065628 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57459d985f-7pt4c" event={"ID":"281434f1-0f91-404d-8f13-2bbf97b18237","Type":"ContainerStarted","Data":"26da0e619b78e5ec08541ad45599cbf7b5e316c975bb84a16f8481bbe3957aa2"} Dec 09 17:21:15 crc kubenswrapper[4853]: I1209 17:21:15.102477 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b4ddcfc8d-7ttth" podStartSLOduration=6.5914743510000005 podStartE2EDuration="10.102458984s" podCreationTimestamp="2025-12-09 17:21:05 +0000 UTC" firstStartedPulling="2025-12-09 
17:21:10.354974408 +0000 UTC m=+1497.289713590" lastFinishedPulling="2025-12-09 17:21:13.865959041 +0000 UTC m=+1500.800698223" observedRunningTime="2025-12-09 17:21:15.084307276 +0000 UTC m=+1502.019046458" watchObservedRunningTime="2025-12-09 17:21:15.102458984 +0000 UTC m=+1502.037198166" Dec 09 17:21:16 crc kubenswrapper[4853]: I1209 17:21:16.077933 4853 generic.go:334] "Generic (PLEG): container finished" podID="18c4cb93-d59f-4160-9e4d-506184f49afe" containerID="4fd5170f06d48368f9a3a457b88be65264b2380f3cd23b303926251e091dcbef" exitCode=0 Dec 09 17:21:16 crc kubenswrapper[4853]: I1209 17:21:16.077990 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tpt7s" event={"ID":"18c4cb93-d59f-4160-9e4d-506184f49afe","Type":"ContainerDied","Data":"4fd5170f06d48368f9a3a457b88be65264b2380f3cd23b303926251e091dcbef"} Dec 09 17:21:16 crc kubenswrapper[4853]: I1209 17:21:16.080230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57459d985f-7pt4c" event={"ID":"281434f1-0f91-404d-8f13-2bbf97b18237","Type":"ContainerStarted","Data":"c594fc3db2b380f41fbbb8451cbcfabf723221fb1d3730b6f324e181fc504713"} Dec 09 17:21:16 crc kubenswrapper[4853]: I1209 17:21:16.114354 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-57459d985f-7pt4c" podStartSLOduration=7.134237859 podStartE2EDuration="11.11433407s" podCreationTimestamp="2025-12-09 17:21:05 +0000 UTC" firstStartedPulling="2025-12-09 17:21:10.499680978 +0000 UTC m=+1497.434420160" lastFinishedPulling="2025-12-09 17:21:14.479777189 +0000 UTC m=+1501.414516371" observedRunningTime="2025-12-09 17:21:16.109219768 +0000 UTC m=+1503.043958970" watchObservedRunningTime="2025-12-09 17:21:16.11433407 +0000 UTC m=+1503.049073252" Dec 09 17:21:16 crc kubenswrapper[4853]: I1209 17:21:16.779578 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.020647 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.519630 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.610497 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-db-sync-config-data\") pod \"18c4cb93-d59f-4160-9e4d-506184f49afe\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.610551 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-combined-ca-bundle\") pod \"18c4cb93-d59f-4160-9e4d-506184f49afe\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.610629 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18c4cb93-d59f-4160-9e4d-506184f49afe-etc-machine-id\") pod \"18c4cb93-d59f-4160-9e4d-506184f49afe\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.610732 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-config-data\") pod \"18c4cb93-d59f-4160-9e4d-506184f49afe\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.610759 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-scripts\") pod \"18c4cb93-d59f-4160-9e4d-506184f49afe\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.610809 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c4cb93-d59f-4160-9e4d-506184f49afe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "18c4cb93-d59f-4160-9e4d-506184f49afe" (UID: "18c4cb93-d59f-4160-9e4d-506184f49afe"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.610825 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbr7q\" (UniqueName: \"kubernetes.io/projected/18c4cb93-d59f-4160-9e4d-506184f49afe-kube-api-access-kbr7q\") pod \"18c4cb93-d59f-4160-9e4d-506184f49afe\" (UID: \"18c4cb93-d59f-4160-9e4d-506184f49afe\") " Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.611247 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/18c4cb93-d59f-4160-9e4d-506184f49afe-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.620872 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c4cb93-d59f-4160-9e4d-506184f49afe-kube-api-access-kbr7q" (OuterVolumeSpecName: "kube-api-access-kbr7q") pod "18c4cb93-d59f-4160-9e4d-506184f49afe" (UID: "18c4cb93-d59f-4160-9e4d-506184f49afe"). InnerVolumeSpecName "kube-api-access-kbr7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.620893 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "18c4cb93-d59f-4160-9e4d-506184f49afe" (UID: "18c4cb93-d59f-4160-9e4d-506184f49afe"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.627070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-scripts" (OuterVolumeSpecName: "scripts") pod "18c4cb93-d59f-4160-9e4d-506184f49afe" (UID: "18c4cb93-d59f-4160-9e4d-506184f49afe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.666701 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18c4cb93-d59f-4160-9e4d-506184f49afe" (UID: "18c4cb93-d59f-4160-9e4d-506184f49afe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.695614 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-config-data" (OuterVolumeSpecName: "config-data") pod "18c4cb93-d59f-4160-9e4d-506184f49afe" (UID: "18c4cb93-d59f-4160-9e4d-506184f49afe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.713036 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.713071 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.713081 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbr7q\" (UniqueName: \"kubernetes.io/projected/18c4cb93-d59f-4160-9e4d-506184f49afe-kube-api-access-kbr7q\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.713092 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:18 crc kubenswrapper[4853]: I1209 17:21:18.713102 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c4cb93-d59f-4160-9e4d-506184f49afe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:19 crc kubenswrapper[4853]: I1209 17:21:19.132189 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tpt7s" event={"ID":"18c4cb93-d59f-4160-9e4d-506184f49afe","Type":"ContainerDied","Data":"f9951be682dfe5c8174effe7a6476dbb00c98c1ec26525616fdc265358037e03"} Dec 09 17:21:19 crc kubenswrapper[4853]: I1209 17:21:19.132240 4853 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="f9951be682dfe5c8174effe7a6476dbb00c98c1ec26525616fdc265358037e03" Dec 09 17:21:19 crc kubenswrapper[4853]: I1209 17:21:19.132312 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tpt7s" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.177093 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.288443 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:20 crc kubenswrapper[4853]: E1209 17:21:20.289253 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c4cb93-d59f-4160-9e4d-506184f49afe" containerName="cinder-db-sync" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.289270 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c4cb93-d59f-4160-9e4d-506184f49afe" containerName="cinder-db-sync" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.289482 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c4cb93-d59f-4160-9e4d-506184f49afe" containerName="cinder-db-sync" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.290828 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.299052 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.300289 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.300409 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wj6zm" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.300607 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.320477 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.439153 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4twt"] Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.439406 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerName="dnsmasq-dns" containerID="cri-o://5ab0579c4adbb3944a1ca01469aa60778f178a9f87d46308a68c3e74df4d29c4" gracePeriod=10 Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.440808 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.501311 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.501453 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54w5x\" (UniqueName: 
\"kubernetes.io/projected/a548ed8e-f015-4c35-817a-a00733948bd6-kube-api-access-54w5x\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.501528 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.501688 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.501802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a548ed8e-f015-4c35-817a-a00733948bd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.501837 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.504211 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hh78g"] Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.515108 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.575713 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hh78g"] Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.613270 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.613394 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k6lh\" (UniqueName: \"kubernetes.io/projected/72ebed7f-14e0-4b36-bf40-f66e71e044b6-kube-api-access-4k6lh\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.613680 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.613712 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.613789 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a548ed8e-f015-4c35-817a-a00733948bd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.613922 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.614082 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-config\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.614231 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.614393 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.614533 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.614725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a548ed8e-f015-4c35-817a-a00733948bd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.630475 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.630628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54w5x\" (UniqueName: \"kubernetes.io/projected/a548ed8e-f015-4c35-817a-a00733948bd6-kube-api-access-54w5x\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.660713 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.664374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.678366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.682201 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54w5x\" (UniqueName: \"kubernetes.io/projected/a548ed8e-f015-4c35-817a-a00733948bd6-kube-api-access-54w5x\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.682818 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " 
pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.737922 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-config\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.737988 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.738026 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.738054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.738126 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k6lh\" (UniqueName: \"kubernetes.io/projected/72ebed7f-14e0-4b36-bf40-f66e71e044b6-kube-api-access-4k6lh\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.738161 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.738979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.739650 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.739755 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 
17:21:20.739854 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-config\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.740383 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.767376 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k6lh\" (UniqueName: \"kubernetes.io/projected/72ebed7f-14e0-4b36-bf40-f66e71e044b6-kube-api-access-4k6lh\") pod \"dnsmasq-dns-5c9776ccc5-hh78g\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.784668 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.787323 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.808436 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.839899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-scripts\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.839932 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.839984 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.844521 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.849412 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-logs\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.849565 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89k2m\" (UniqueName: \"kubernetes.io/projected/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-kube-api-access-89k2m\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " 
pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.849637 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.849704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.922179 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.932830 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.953143 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.953226 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-logs\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.953260 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89k2m\" (UniqueName: \"kubernetes.io/projected/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-kube-api-access-89k2m\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.953285 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.953315 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.953404 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-scripts\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.953419 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.958090 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.958149 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.958637 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-logs\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.965198 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.966062 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-scripts\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.974345 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:20 crc kubenswrapper[4853]: I1209 17:21:20.989959 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89k2m\" (UniqueName: \"kubernetes.io/projected/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-kube-api-access-89k2m\") pod \"cinder-api-0\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " pod="openstack/cinder-api-0" Dec 09 17:21:21 crc kubenswrapper[4853]: I1209 17:21:21.188531 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 17:21:21 crc kubenswrapper[4853]: I1209 17:21:21.230690 4853 generic.go:334] "Generic (PLEG): container finished" podID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerID="5ab0579c4adbb3944a1ca01469aa60778f178a9f87d46308a68c3e74df4d29c4" exitCode=0 Dec 09 17:21:21 crc kubenswrapper[4853]: I1209 17:21:21.230747 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" event={"ID":"2255858c-cd09-4ef5-b023-195188f6f4d8","Type":"ContainerDied","Data":"5ab0579c4adbb3944a1ca01469aa60778f178a9f87d46308a68c3e74df4d29c4"} Dec 09 17:21:21 crc kubenswrapper[4853]: I1209 17:21:21.445009 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: connect: connection refused" Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.339053 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-767f9884cb-lxgw6" Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.476215 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d67cbc658-l5567"] Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.495051 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d67cbc658-l5567" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api-log" containerID="cri-o://6c2852c4c514547a84e0feb7be541614703115c92d9c083c482658dd689dde38" gracePeriod=30 Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.495190 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d67cbc658-l5567" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api" containerID="cri-o://fa6da9c62b193ef3e9c1b4ff9e84e21a5ac99f147b9619cc8fd7ff565afd075d" gracePeriod=30 Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.512970 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d67cbc658-l5567" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.194:9311/healthcheck\": EOF" Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.513280 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67cbc658-l5567" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.194:9311/healthcheck\": EOF" Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.513372 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67cbc658-l5567" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.194:9311/healthcheck\": EOF" Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.567556 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:21:22 crc kubenswrapper[4853]: I1209 17:21:22.794427 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 17:21:23 crc kubenswrapper[4853]: I1209 17:21:23.258550 4853 generic.go:334] "Generic (PLEG): container finished" podID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" 
containerID="6c2852c4c514547a84e0feb7be541614703115c92d9c083c482658dd689dde38" exitCode=143 Dec 09 17:21:23 crc kubenswrapper[4853]: I1209 17:21:23.258590 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67cbc658-l5567" event={"ID":"08b1f5b0-2c5b-4216-9ff7-3206ca910e68","Type":"ContainerDied","Data":"6c2852c4c514547a84e0feb7be541614703115c92d9c083c482658dd689dde38"} Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.305824 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.404255 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-config\") pod \"2255858c-cd09-4ef5-b023-195188f6f4d8\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.404363 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-swift-storage-0\") pod \"2255858c-cd09-4ef5-b023-195188f6f4d8\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.404570 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-nb\") pod \"2255858c-cd09-4ef5-b023-195188f6f4d8\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.404679 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djglj\" (UniqueName: \"kubernetes.io/projected/2255858c-cd09-4ef5-b023-195188f6f4d8-kube-api-access-djglj\") pod \"2255858c-cd09-4ef5-b023-195188f6f4d8\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.404713 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-svc\") pod \"2255858c-cd09-4ef5-b023-195188f6f4d8\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.404788 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-sb\") pod \"2255858c-cd09-4ef5-b023-195188f6f4d8\" (UID: \"2255858c-cd09-4ef5-b023-195188f6f4d8\") " Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.455003 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2255858c-cd09-4ef5-b023-195188f6f4d8-kube-api-access-djglj" (OuterVolumeSpecName: "kube-api-access-djglj") pod "2255858c-cd09-4ef5-b023-195188f6f4d8" (UID: "2255858c-cd09-4ef5-b023-195188f6f4d8"). InnerVolumeSpecName "kube-api-access-djglj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.511431 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djglj\" (UniqueName: \"kubernetes.io/projected/2255858c-cd09-4ef5-b023-195188f6f4d8-kube-api-access-djglj\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:24 crc kubenswrapper[4853]: E1209 17:21:24.607404 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" Dec 09 17:21:24 crc kubenswrapper[4853]: W1209 17:21:24.616781 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72ebed7f_14e0_4b36_bf40_f66e71e044b6.slice/crio-14fe3409df95b5604d8e320c61a8d378acb48b9e8626a7d6b268e2b77df76c6b WatchSource:0}: Error finding container 14fe3409df95b5604d8e320c61a8d378acb48b9e8626a7d6b268e2b77df76c6b: Status 404 returned error can't find the container with id 14fe3409df95b5604d8e320c61a8d378acb48b9e8626a7d6b268e2b77df76c6b Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.617440 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hh78g"] Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.620494 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2255858c-cd09-4ef5-b023-195188f6f4d8" (UID: "2255858c-cd09-4ef5-b023-195188f6f4d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.621895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2255858c-cd09-4ef5-b023-195188f6f4d8" (UID: "2255858c-cd09-4ef5-b023-195188f6f4d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.645825 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-config" (OuterVolumeSpecName: "config") pod "2255858c-cd09-4ef5-b023-195188f6f4d8" (UID: "2255858c-cd09-4ef5-b023-195188f6f4d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.647640 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2255858c-cd09-4ef5-b023-195188f6f4d8" (UID: "2255858c-cd09-4ef5-b023-195188f6f4d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.652520 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2255858c-cd09-4ef5-b023-195188f6f4d8" (UID: "2255858c-cd09-4ef5-b023-195188f6f4d8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.719320 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.719351 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.719362 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.719375 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.719386 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2255858c-cd09-4ef5-b023-195188f6f4d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.772167 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.784129 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.819982 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-645985f88c-fqxds" Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.893709 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6684749cc6-45h7r"] Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.894851 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6684749cc6-45h7r" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-api" containerID="cri-o://3f32fc6c70afbf4e6f684b901c7682c1de73579e865c7ed2a6ff74901c708bf2" gracePeriod=30 Dec 09 17:21:24 crc kubenswrapper[4853]: I1209 17:21:24.895698 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6684749cc6-45h7r" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-httpd" containerID="cri-o://2cdbe0b9fbd8f63f92598fac2e6d85452e540640ffdb246be4ba33eeefa48181" gracePeriod=30 Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.315455 4853 generic.go:334] "Generic (PLEG): container finished" podID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerID="da924114d938d77800a9374479a024bc65bd3441e827ed8337b3aa639aa07430" exitCode=0 Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.315533 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" event={"ID":"72ebed7f-14e0-4b36-bf40-f66e71e044b6","Type":"ContainerDied","Data":"da924114d938d77800a9374479a024bc65bd3441e827ed8337b3aa639aa07430"} Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.315566 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" 
event={"ID":"72ebed7f-14e0-4b36-bf40-f66e71e044b6","Type":"ContainerStarted","Data":"14fe3409df95b5604d8e320c61a8d378acb48b9e8626a7d6b268e2b77df76c6b"} Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.319707 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerStarted","Data":"ec43bf599b4e2592b370c26ee44c2227516e4cf45c0f5357f07b3cab7764b221"} Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.319735 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="ceilometer-notification-agent" containerID="cri-o://d997511fef1a2c72ccc4c40afea69980def8c109dd15e0ba31967fbbc8c3804c" gracePeriod=30 Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.319823 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="proxy-httpd" containerID="cri-o://ec43bf599b4e2592b370c26ee44c2227516e4cf45c0f5357f07b3cab7764b221" gracePeriod=30 Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.319835 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.319861 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="sg-core" containerID="cri-o://6602855f94674d4315f02fc01b237ef16da1b2cc3afd081ba4478a3e7deab365" gracePeriod=30 Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.321424 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a548ed8e-f015-4c35-817a-a00733948bd6","Type":"ContainerStarted","Data":"c24f74850e06739f75ca5d96481d9622551278f9004fb4a2d5f93244780638b4"} Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.331113 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" event={"ID":"2255858c-cd09-4ef5-b023-195188f6f4d8","Type":"ContainerDied","Data":"844242f749e9ce2d3dccaf8fd2ae8677d131e65e16059ba5cf7466dfb2f9a0c7"} Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.331163 4853 scope.go:117] "RemoveContainer" containerID="5ab0579c4adbb3944a1ca01469aa60778f178a9f87d46308a68c3e74df4d29c4" Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.331281 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-h4twt" Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.340301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a7a0ff-4adc-4762-9651-7ec2bffbd932","Type":"ContainerStarted","Data":"9112cf6d62350e5faf18fb3606a6d795c926a32b4544634732bb07a35e0c9480"} Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.353525 4853 generic.go:334] "Generic (PLEG): container finished" podID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerID="2cdbe0b9fbd8f63f92598fac2e6d85452e540640ffdb246be4ba33eeefa48181" exitCode=0 Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.353571 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684749cc6-45h7r" event={"ID":"3a04a5a0-6547-4374-bc53-7b0ae86adf2b","Type":"ContainerDied","Data":"2cdbe0b9fbd8f63f92598fac2e6d85452e540640ffdb246be4ba33eeefa48181"} Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.504962 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4twt"] Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.519335 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-h4twt"] Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.563397 4853 scope.go:117] "RemoveContainer" containerID="c60d52c616c24569dee6075819a353f582c886eda1ceeb1a5ba7d177aa0b7f7a" Dec 09 17:21:25 crc kubenswrapper[4853]: I1209 17:21:25.589997 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" path="/var/lib/kubelet/pods/2255858c-cd09-4ef5-b023-195188f6f4d8/volumes" Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.366308 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a7a0ff-4adc-4762-9651-7ec2bffbd932","Type":"ContainerStarted","Data":"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b"} Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.372312 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" event={"ID":"72ebed7f-14e0-4b36-bf40-f66e71e044b6","Type":"ContainerStarted","Data":"afb53e07ac2f4a761b6337eafcc668a4df5dfbebfc5914b44f23a4f9c2400359"} Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.372409 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.382140 4853 generic.go:334] "Generic (PLEG): container finished" podID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerID="ec43bf599b4e2592b370c26ee44c2227516e4cf45c0f5357f07b3cab7764b221" exitCode=0 Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.382170 4853 generic.go:334] "Generic (PLEG): container finished" podID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerID="6602855f94674d4315f02fc01b237ef16da1b2cc3afd081ba4478a3e7deab365" exitCode=2 Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.382207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerDied","Data":"ec43bf599b4e2592b370c26ee44c2227516e4cf45c0f5357f07b3cab7764b221"} Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.382232 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerDied","Data":"6602855f94674d4315f02fc01b237ef16da1b2cc3afd081ba4478a3e7deab365"} Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.385827 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a548ed8e-f015-4c35-817a-a00733948bd6","Type":"ContainerStarted","Data":"fd599f71251b72ef4f524c4a1b3f03d2d92dc51e92dc64f012e31f4044e852ae"} Dec 09 17:21:26 crc kubenswrapper[4853]: I1209 17:21:26.399315 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" podStartSLOduration=6.399291439 podStartE2EDuration="6.399291439s" podCreationTimestamp="2025-12-09 17:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:26.388058165 +0000 UTC m=+1513.322797347" watchObservedRunningTime="2025-12-09 17:21:26.399291439 +0000 UTC m=+1513.334030621" Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.037980 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67cbc658-l5567" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.194:9311/healthcheck\": read tcp 10.217.0.2:46922->10.217.0.194:9311: read: connection reset by peer" Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.038043 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67cbc658-l5567" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.194:9311/healthcheck\": read tcp 10.217.0.2:46920->10.217.0.194:9311: read: connection reset by peer" Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.420393 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a548ed8e-f015-4c35-817a-a00733948bd6","Type":"ContainerStarted","Data":"6b66928a44e6cbaf9bb3deaad1ac18ba04be19f9a173e89cdc4d0c116cde25e0"} Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.425206 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a7a0ff-4adc-4762-9651-7ec2bffbd932","Type":"ContainerStarted","Data":"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0"} Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.427381 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api" containerID="cri-o://2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0" gracePeriod=30 Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.427391 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.431792 4853 generic.go:334] "Generic (PLEG): container finished" podID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerID="fa6da9c62b193ef3e9c1b4ff9e84e21a5ac99f147b9619cc8fd7ff565afd075d" exitCode=0 Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.431953 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67cbc658-l5567" event={"ID":"08b1f5b0-2c5b-4216-9ff7-3206ca910e68","Type":"ContainerDied","Data":"fa6da9c62b193ef3e9c1b4ff9e84e21a5ac99f147b9619cc8fd7ff565afd075d"} Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.437661 
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.437661 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api-log" containerID="cri-o://e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b" gracePeriod=30
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.459285 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.620437167 podStartE2EDuration="7.459267062s" podCreationTimestamp="2025-12-09 17:21:20 +0000 UTC" firstStartedPulling="2025-12-09 17:21:24.790397725 +0000 UTC m=+1511.725136907" lastFinishedPulling="2025-12-09 17:21:25.629227619 +0000 UTC m=+1512.563966802" observedRunningTime="2025-12-09 17:21:27.459073177 +0000 UTC m=+1514.393812429" watchObservedRunningTime="2025-12-09 17:21:27.459267062 +0000 UTC m=+1514.394006244"
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.490841 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.490820395 podStartE2EDuration="7.490820395s" podCreationTimestamp="2025-12-09 17:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:27.486637008 +0000 UTC m=+1514.421376190" watchObservedRunningTime="2025-12-09 17:21:27.490820395 +0000 UTC m=+1514.425559577"
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.794495 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d67cbc658-l5567"
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.928160 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpx55\" (UniqueName: \"kubernetes.io/projected/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-kube-api-access-dpx55\") pod \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") "
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.928264 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-logs\") pod \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") "
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.928297 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-combined-ca-bundle\") pod \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") "
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.928395 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data\") pod \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") "
Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.928476 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data-custom\") pod \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\" (UID: \"08b1f5b0-2c5b-4216-9ff7-3206ca910e68\") "
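
The "Observed pod startup duration" entries above break down as: podStartE2EDuration is creation-to-running, and podStartSLOduration appears to be that figure minus image-pull time. For cinder-scheduler-0 the pull window (17:21:24.790 to 17:21:25.629, about 0.839s) is exactly the gap between the 7.459s E2E and 6.620s SLO figures, while the zero-value firstStartedPulling/lastFinishedPulling ("0001-01-01 ...") on the other entries suggest no pull was recorded at all. A hypothetical parser for those fields:

    import re
    import sys

    ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"

    # Matches pod_startup_latency_tracker.go "Observed pod startup duration" entries.
    SLO_RE = re.compile(
        r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
        r' podStartSLOduration=(?P<slo>[\d.]+) podStartE2EDuration="(?P<e2e>[^"]+)"'
        r'.*?firstStartedPulling="(?P<pull>[^"]+)"'
    )

    for line in sys.stdin:
        m = SLO_RE.search(line)
        if m:
            note = "no image pull recorded" if m['pull'] == ZERO_TIME else "includes an image pull"
            print(f"{m['pod']}: SLO={m['slo']}s E2E={m['e2e']} ({note})")
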
"kubernetes.io/empty-dir/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-logs" (OuterVolumeSpecName: "logs") pod "08b1f5b0-2c5b-4216-9ff7-3206ca910e68" (UID: "08b1f5b0-2c5b-4216-9ff7-3206ca910e68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.942153 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-kube-api-access-dpx55" (OuterVolumeSpecName: "kube-api-access-dpx55") pod "08b1f5b0-2c5b-4216-9ff7-3206ca910e68" (UID: "08b1f5b0-2c5b-4216-9ff7-3206ca910e68"). InnerVolumeSpecName "kube-api-access-dpx55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:27 crc kubenswrapper[4853]: I1209 17:21:27.946744 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08b1f5b0-2c5b-4216-9ff7-3206ca910e68" (UID: "08b1f5b0-2c5b-4216-9ff7-3206ca910e68"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.002685 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data" (OuterVolumeSpecName: "config-data") pod "08b1f5b0-2c5b-4216-9ff7-3206ca910e68" (UID: "08b1f5b0-2c5b-4216-9ff7-3206ca910e68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.020736 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.031404 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.031432 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.031442 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpx55\" (UniqueName: \"kubernetes.io/projected/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-kube-api-access-dpx55\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.031451 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.032020 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08b1f5b0-2c5b-4216-9ff7-3206ca910e68" (UID: "08b1f5b0-2c5b-4216-9ff7-3206ca910e68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.132566 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data\") pod \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.132693 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-combined-ca-bundle\") pod \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.132722 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data-custom\") pod \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.132754 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89k2m\" (UniqueName: \"kubernetes.io/projected/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-kube-api-access-89k2m\") pod \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.132899 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-scripts\") pod \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.132935 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-logs\") pod \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.132998 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-etc-machine-id\") pod \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\" (UID: \"e2a7a0ff-4adc-4762-9651-7ec2bffbd932\") " Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.133455 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1f5b0-2c5b-4216-9ff7-3206ca910e68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.133502 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e2a7a0ff-4adc-4762-9651-7ec2bffbd932" (UID: "e2a7a0ff-4adc-4762-9651-7ec2bffbd932"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.134065 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-logs" (OuterVolumeSpecName: "logs") pod "e2a7a0ff-4adc-4762-9651-7ec2bffbd932" (UID: "e2a7a0ff-4adc-4762-9651-7ec2bffbd932"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.137726 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2a7a0ff-4adc-4762-9651-7ec2bffbd932" (UID: "e2a7a0ff-4adc-4762-9651-7ec2bffbd932"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.138456 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-scripts" (OuterVolumeSpecName: "scripts") pod "e2a7a0ff-4adc-4762-9651-7ec2bffbd932" (UID: "e2a7a0ff-4adc-4762-9651-7ec2bffbd932"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.140857 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-kube-api-access-89k2m" (OuterVolumeSpecName: "kube-api-access-89k2m") pod "e2a7a0ff-4adc-4762-9651-7ec2bffbd932" (UID: "e2a7a0ff-4adc-4762-9651-7ec2bffbd932"). InnerVolumeSpecName "kube-api-access-89k2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.171849 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a7a0ff-4adc-4762-9651-7ec2bffbd932" (UID: "e2a7a0ff-4adc-4762-9651-7ec2bffbd932"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.195738 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data" (OuterVolumeSpecName: "config-data") pod "e2a7a0ff-4adc-4762-9651-7ec2bffbd932" (UID: "e2a7a0ff-4adc-4762-9651-7ec2bffbd932"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.236482 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.236520 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.236533 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.236543 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.236554 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.236564 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.236575 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89k2m\" (UniqueName: \"kubernetes.io/projected/e2a7a0ff-4adc-4762-9651-7ec2bffbd932-kube-api-access-89k2m\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.446099 4853 generic.go:334] "Generic (PLEG): container finished" podID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerID="2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0" exitCode=0 Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.446149 4853 generic.go:334] "Generic (PLEG): container finished" podID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerID="e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b" exitCode=143 Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.446174 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.446201 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a7a0ff-4adc-4762-9651-7ec2bffbd932","Type":"ContainerDied","Data":"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0"} Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.446270 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a7a0ff-4adc-4762-9651-7ec2bffbd932","Type":"ContainerDied","Data":"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b"} Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.446283 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a7a0ff-4adc-4762-9651-7ec2bffbd932","Type":"ContainerDied","Data":"9112cf6d62350e5faf18fb3606a6d795c926a32b4544634732bb07a35e0c9480"} Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.446339 4853 scope.go:117] "RemoveContainer" containerID="2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.452003 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d67cbc658-l5567" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.452684 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67cbc658-l5567" event={"ID":"08b1f5b0-2c5b-4216-9ff7-3206ca910e68","Type":"ContainerDied","Data":"a377f2a871a2d6313d027870eaf1abd08530f3114dd2c9b0a5de892859246c68"} Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.501422 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d67cbc658-l5567"] Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.508341 4853 scope.go:117] "RemoveContainer" containerID="e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.527111 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d67cbc658-l5567"] Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.539772 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.547988 4853 scope.go:117] "RemoveContainer" containerID="2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0" Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.552001 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0\": container with ID starting with 2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0 not found: ID does not exist" containerID="2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.552142 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0"} err="failed to get container status \"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0\": rpc error: code = NotFound desc = could not find container \"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0\": container with ID starting with 2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0 not found: ID does not exist" Dec 09 17:21:28 crc 
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.552222 4853 scope.go:117] "RemoveContainer" containerID="e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.555670 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.555947 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b\": container with ID starting with e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b not found: ID does not exist" containerID="e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.556038 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b"} err="failed to get container status \"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b\": rpc error: code = NotFound desc = could not find container \"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b\": container with ID starting with e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b not found: ID does not exist"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.556128 4853 scope.go:117] "RemoveContainer" containerID="2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.556780 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0"} err="failed to get container status \"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0\": rpc error: code = NotFound desc = could not find container \"2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0\": container with ID starting with 2c1dd74c73dc1b6d105d7d01bf012f62637b150e4e56f572db73fcfdb00bb6f0 not found: ID does not exist"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.556878 4853 scope.go:117] "RemoveContainer" containerID="e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.558530 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b"} err="failed to get container status \"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b\": rpc error: code = NotFound desc = could not find container \"e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b\": container with ID starting with e1af9fc005a4fce2eb4a3dc348ed231195b32edb05af380227e8afcccbfeb46b not found: ID does not exist"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.558628 4853 scope.go:117] "RemoveContainer" containerID="fa6da9c62b193ef3e9c1b4ff9e84e21a5ac99f147b9619cc8fd7ff565afd075d"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.564651 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.565224 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api-log"
podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api-log" Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.565382 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerName="dnsmasq-dns" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.565434 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerName="dnsmasq-dns" Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.565495 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.565545 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api" Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.565617 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerName="init" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.565670 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerName="init" Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.565730 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api-log" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.565781 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api-log" Dec 09 17:21:28 crc kubenswrapper[4853]: E1209 17:21:28.565842 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.565918 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.566187 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api-log" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.566285 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api-log" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.566371 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2255858c-cd09-4ef5-b023-195188f6f4d8" containerName="dnsmasq-dns" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.566452 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" containerName="cinder-api" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.566538 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" containerName="barbican-api" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.567796 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.571011 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.571233 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.571826 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.573576 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.820628 4853 scope.go:117] "RemoveContainer" containerID="6c2852c4c514547a84e0feb7be541614703115c92d9c083c482658dd689dde38" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.862911 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-config-data-custom\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.862999 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-scripts\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.863046 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49r4v\" (UniqueName: \"kubernetes.io/projected/23f2bd57-bac0-42fd-8203-0fd8f3720109-kube-api-access-49r4v\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.863078 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-public-tls-certs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.863103 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.863125 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23f2bd57-bac0-42fd-8203-0fd8f3720109-etc-machine-id\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.863142 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f2bd57-bac0-42fd-8203-0fd8f3720109-logs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0" Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.863204 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.863231 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-config-data\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.964817 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-config-data-custom\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.964917 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-scripts\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.965003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49r4v\" (UniqueName: \"kubernetes.io/projected/23f2bd57-bac0-42fd-8203-0fd8f3720109-kube-api-access-49r4v\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.965025 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-public-tls-certs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.965059 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.965093 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23f2bd57-bac0-42fd-8203-0fd8f3720109-etc-machine-id\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.965118 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f2bd57-bac0-42fd-8203-0fd8f3720109-logs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.965189 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.965228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-config-data\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.969917 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23f2bd57-bac0-42fd-8203-0fd8f3720109-etc-machine-id\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.973338 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-public-tls-certs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.975473 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23f2bd57-bac0-42fd-8203-0fd8f3720109-logs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.983243 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-scripts\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.988194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.989803 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-config-data-custom\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.990256 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:28 crc kubenswrapper[4853]: I1209 17:21:28.991018 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f2bd57-bac0-42fd-8203-0fd8f3720109-config-data\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.006396 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49r4v\" (UniqueName: \"kubernetes.io/projected/23f2bd57-bac0-42fd-8203-0fd8f3720109-kube-api-access-49r4v\") pod \"cinder-api-0\" (UID: \"23f2bd57-bac0-42fd-8203-0fd8f3720109\") " pod="openstack/cinder-api-0"
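
The replacement cinder-api-0 (fresh UID 23f2bd57-...) runs each of its nine volumes through "VerifyControllerAttachedVolume started", "operationExecutor.MountVolume started", and "MountVolume.SetUp succeeded" above. A hypothetical sketch pairing the last two phases to catch mounts that never complete, again keyed on UniqueName and assuming the escaped-quote layout shown:

    import re
    import sys

    MOUNT_START_RE = re.compile(r'MountVolume started for volume .*?UniqueName: \\"(?P<u>[^\\"]+)\\"')
    SETUP_OK_RE    = re.compile(r'MountVolume\.SetUp succeeded for volume .*?UniqueName: \\"(?P<u>[^\\"]+)\\"')

    started, ok = set(), set()
    for line in sys.stdin:
        if (m := MOUNT_START_RE.search(line)):
            started.add(m['u'])
        if (m := SETUP_OK_RE.search(line)):
            ok.add(m['u'])

    for u in sorted(started - ok):
        print("mount started but SetUp never succeeded:", u)
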
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.081448 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 09 17:21:29 crc kubenswrapper[4853]: E1209 17:21:29.410309 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a04a5a0_6547_4374_bc53_7b0ae86adf2b.slice/crio-3f32fc6c70afbf4e6f684b901c7682c1de73579e865c7ed2a6ff74901c708bf2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb5f280_f2fd_425a_adc9_58ef46c2afa1.slice/crio-d997511fef1a2c72ccc4c40afea69980def8c109dd15e0ba31967fbbc8c3804c.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.471311 4853 generic.go:334] "Generic (PLEG): container finished" podID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerID="d997511fef1a2c72ccc4c40afea69980def8c109dd15e0ba31967fbbc8c3804c" exitCode=0
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.471390 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerDied","Data":"d997511fef1a2c72ccc4c40afea69980def8c109dd15e0ba31967fbbc8c3804c"}
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.474865 4853 generic.go:334] "Generic (PLEG): container finished" podID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerID="3f32fc6c70afbf4e6f684b901c7682c1de73579e865c7ed2a6ff74901c708bf2" exitCode=0
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.474917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684749cc6-45h7r" event={"ID":"3a04a5a0-6547-4374-bc53-7b0ae86adf2b","Type":"ContainerDied","Data":"3f32fc6c70afbf4e6f684b901c7682c1de73579e865c7ed2a6ff74901c708bf2"}
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.584618 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b1f5b0-2c5b-4216-9ff7-3206ca910e68" path="/var/lib/kubelet/pods/08b1f5b0-2c5b-4216-9ff7-3206ca910e68/volumes"
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.586129 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a7a0ff-4adc-4762-9651-7ec2bffbd932" path="/var/lib/kubelet/pods/e2a7a0ff-4adc-4762-9651-7ec2bffbd932/volumes"
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.882770 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 09 17:21:29 crc kubenswrapper[4853]: I1209 17:21:29.974623 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6684749cc6-45h7r"
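
The lone cadvisor E-line above ("Partial failure issuing cadvisor.ContainerInfoV2 ... RecentStats: unable to find data in memory cache") names precisely the two crio-*.scope cgroups of containers being torn down at that moment (3f32fc6c... and d997511f...), so it looks like a stats race on dying containers rather than a node-level problem. A tiny hypothetical extractor for cross-checking those IDs against the ContainerDied events:

    import re
    import sys

    SCOPE_RE = re.compile(r'crio-([0-9a-f]{64})\.scope')

    for line in sys.stdin:
        if "Partial failure issuing cadvisor.ContainerInfoV2" in line:
            for cid in SCOPE_RE.findall(line):
                # These IDs should each match a nearby "ContainerDied" Data field.
                print("stats missing for container", cid[:12])
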
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26mrf\" (UniqueName: \"kubernetes.io/projected/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-kube-api-access-26mrf\") pod \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000549 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-combined-ca-bundle\") pod \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000585 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgwbj\" (UniqueName: \"kubernetes.io/projected/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-kube-api-access-rgwbj\") pod \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000628 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-httpd-config\") pod \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000666 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-run-httpd\") pod \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000739 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-config\") pod \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000765 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-scripts\") pod \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000786 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-config-data\") pod \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000850 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-ovndb-tls-certs\") pod \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\" (UID: \"3a04a5a0-6547-4374-bc53-7b0ae86adf2b\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.000881 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-sg-core-conf-yaml\") pod \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\" (UID: 
\"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.001640 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9cb5f280-f2fd-425a-adc9-58ef46c2afa1" (UID: "9cb5f280-f2fd-425a-adc9-58ef46c2afa1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.002473 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-combined-ca-bundle\") pod \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.002629 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-log-httpd\") pod \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\" (UID: \"9cb5f280-f2fd-425a-adc9-58ef46c2afa1\") " Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.004054 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.004589 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9cb5f280-f2fd-425a-adc9-58ef46c2afa1" (UID: "9cb5f280-f2fd-425a-adc9-58ef46c2afa1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.011637 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3a04a5a0-6547-4374-bc53-7b0ae86adf2b" (UID: "3a04a5a0-6547-4374-bc53-7b0ae86adf2b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.012006 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-kube-api-access-26mrf" (OuterVolumeSpecName: "kube-api-access-26mrf") pod "3a04a5a0-6547-4374-bc53-7b0ae86adf2b" (UID: "3a04a5a0-6547-4374-bc53-7b0ae86adf2b"). InnerVolumeSpecName "kube-api-access-26mrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.027468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-kube-api-access-rgwbj" (OuterVolumeSpecName: "kube-api-access-rgwbj") pod "9cb5f280-f2fd-425a-adc9-58ef46c2afa1" (UID: "9cb5f280-f2fd-425a-adc9-58ef46c2afa1"). InnerVolumeSpecName "kube-api-access-rgwbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.027751 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-scripts" (OuterVolumeSpecName: "scripts") pod "9cb5f280-f2fd-425a-adc9-58ef46c2afa1" (UID: "9cb5f280-f2fd-425a-adc9-58ef46c2afa1"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.071061 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9cb5f280-f2fd-425a-adc9-58ef46c2afa1" (UID: "9cb5f280-f2fd-425a-adc9-58ef46c2afa1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.097670 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-config" (OuterVolumeSpecName: "config") pod "3a04a5a0-6547-4374-bc53-7b0ae86adf2b" (UID: "3a04a5a0-6547-4374-bc53-7b0ae86adf2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.103271 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a04a5a0-6547-4374-bc53-7b0ae86adf2b" (UID: "3a04a5a0-6547-4374-bc53-7b0ae86adf2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.105474 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.105582 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.106059 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.106162 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26mrf\" (UniqueName: \"kubernetes.io/projected/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-kube-api-access-26mrf\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.106248 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.106342 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgwbj\" (UniqueName: \"kubernetes.io/projected/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-kube-api-access-rgwbj\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.107031 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.107130 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:30 crc 
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.113050 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cb5f280-f2fd-425a-adc9-58ef46c2afa1" (UID: "9cb5f280-f2fd-425a-adc9-58ef46c2afa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.167451 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-config-data" (OuterVolumeSpecName: "config-data") pod "9cb5f280-f2fd-425a-adc9-58ef46c2afa1" (UID: "9cb5f280-f2fd-425a-adc9-58ef46c2afa1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.169479 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3a04a5a0-6547-4374-bc53-7b0ae86adf2b" (UID: "3a04a5a0-6547-4374-bc53-7b0ae86adf2b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.208646 4853 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a04a5a0-6547-4374-bc53-7b0ae86adf2b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.208691 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.208701 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cb5f280-f2fd-425a-adc9-58ef46c2afa1-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.489634 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23f2bd57-bac0-42fd-8203-0fd8f3720109","Type":"ContainerStarted","Data":"ce53d78e741b664d1ffee8b6ca312843135094aa635e935162ff44f18a693c6c"}
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.490042 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23f2bd57-bac0-42fd-8203-0fd8f3720109","Type":"ContainerStarted","Data":"a552fb1e1f314f66c9a69f407cb622a4ea652cf2a77147ffd0cad77bf5ed049a"}
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.491831 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684749cc6-45h7r" event={"ID":"3a04a5a0-6547-4374-bc53-7b0ae86adf2b","Type":"ContainerDied","Data":"47388fcf25901e059d38c535c00de2ec98e4a2d1698c56b6337ba28293c79fad"}
Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.491983 4853 scope.go:117] "RemoveContainer" containerID="2cdbe0b9fbd8f63f92598fac2e6d85452e540640ffdb246be4ba33eeefa48181"
Need to start a new one" pod="openstack/neutron-6684749cc6-45h7r" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.515465 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cb5f280-f2fd-425a-adc9-58ef46c2afa1","Type":"ContainerDied","Data":"2572d101d0427de44327dcb42b8bdeb8889c767774a0e3db20cfbf352219e8a0"} Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.515584 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.676489 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6684749cc6-45h7r"] Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.702267 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6684749cc6-45h7r"] Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.751583 4853 scope.go:117] "RemoveContainer" containerID="3f32fc6c70afbf4e6f684b901c7682c1de73579e865c7ed2a6ff74901c708bf2" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.764453 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.787656 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.801787 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:30 crc kubenswrapper[4853]: E1209 17:21:30.802337 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-httpd" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802364 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-httpd" Dec 09 17:21:30 crc kubenswrapper[4853]: E1209 17:21:30.802409 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="proxy-httpd" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802417 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="proxy-httpd" Dec 09 17:21:30 crc kubenswrapper[4853]: E1209 17:21:30.802428 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="sg-core" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802436 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="sg-core" Dec 09 17:21:30 crc kubenswrapper[4853]: E1209 17:21:30.802458 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="ceilometer-notification-agent" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802464 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="ceilometer-notification-agent" Dec 09 17:21:30 crc kubenswrapper[4853]: E1209 17:21:30.802476 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-api" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802481 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-api" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802723 4853 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="ceilometer-notification-agent" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802748 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-httpd" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802764 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" containerName="neutron-api" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802785 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="proxy-httpd" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.802795 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" containerName="sg-core" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.805075 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.811066 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.811284 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.812536 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.822557 4853 scope.go:117] "RemoveContainer" containerID="ec43bf599b4e2592b370c26ee44c2227516e4cf45c0f5357f07b3cab7764b221" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.854993 4853 scope.go:117] "RemoveContainer" containerID="6602855f94674d4315f02fc01b237ef16da1b2cc3afd081ba4478a3e7deab365" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.902290 4853 scope.go:117] "RemoveContainer" containerID="d997511fef1a2c72ccc4c40afea69980def8c109dd15e0ba31967fbbc8c3804c" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.923792 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.926238 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.926317 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-scripts\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.926380 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-log-httpd\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.926406 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-run-httpd\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.926517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.926796 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrc5s\" (UniqueName: \"kubernetes.io/projected/7ae34d76-6148-4f1c-9bc4-5bc514426146-kube-api-access-nrc5s\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.926838 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-config-data\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:30 crc kubenswrapper[4853]: I1209 17:21:30.933890 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.008464 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-99tqh"] Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.008755 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" podUID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerName="dnsmasq-dns" containerID="cri-o://39b40ecdfa5cbf99a4947871eed47955adca53cabd600ea5f42470db4f4db90d" gracePeriod=10 Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.028886 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-log-httpd\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.028946 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-run-httpd\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.029032 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.029156 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrc5s\" (UniqueName: \"kubernetes.io/projected/7ae34d76-6148-4f1c-9bc4-5bc514426146-kube-api-access-nrc5s\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.029197 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-config-data\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.029349 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.029492 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-scripts\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.029803 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-run-httpd\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.029983 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-log-httpd\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.042536 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-scripts\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.044536 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-config-data\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.046870 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.047791 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.049475 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrc5s\" (UniqueName: \"kubernetes.io/projected/7ae34d76-6148-4f1c-9bc4-5bc514426146-kube-api-access-nrc5s\") pod \"ceilometer-0\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") " pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.145318 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.530956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"23f2bd57-bac0-42fd-8203-0fd8f3720109","Type":"ContainerStarted","Data":"d757bc4841edc67c462e1994386d28ee17b6ffb710a6115cce6e644bfc98b3fe"} Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.532966 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.552281 4853 generic.go:334] "Generic (PLEG): container finished" podID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerID="39b40ecdfa5cbf99a4947871eed47955adca53cabd600ea5f42470db4f4db90d" exitCode=0 Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.552326 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" event={"ID":"d2bb9bb7-97a8-42c8-b690-5e697af56654","Type":"ContainerDied","Data":"39b40ecdfa5cbf99a4947871eed47955adca53cabd600ea5f42470db4f4db90d"} Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.552355 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" event={"ID":"d2bb9bb7-97a8-42c8-b690-5e697af56654","Type":"ContainerDied","Data":"1f335cbaa99e2f78a051975468c943e27344796ae4650dc58f4edfbb3c227615"} Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.552366 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f335cbaa99e2f78a051975468c943e27344796ae4650dc58f4edfbb3c227615" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.575821 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.5758021810000002 podStartE2EDuration="3.575802181s" podCreationTimestamp="2025-12-09 17:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:31.558753323 +0000 UTC m=+1518.493492505" watchObservedRunningTime="2025-12-09 17:21:31.575802181 +0000 UTC m=+1518.510541353" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.585211 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a04a5a0-6547-4374-bc53-7b0ae86adf2b" path="/var/lib/kubelet/pods/3a04a5a0-6547-4374-bc53-7b0ae86adf2b/volumes" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.585922 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb5f280-f2fd-425a-adc9-58ef46c2afa1" path="/var/lib/kubelet/pods/9cb5f280-f2fd-425a-adc9-58ef46c2afa1/volumes" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.600585 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.711791 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.753892 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-nb\") pod \"d2bb9bb7-97a8-42c8-b690-5e697af56654\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.753952 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-sb\") pod \"d2bb9bb7-97a8-42c8-b690-5e697af56654\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.753977 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-svc\") pod \"d2bb9bb7-97a8-42c8-b690-5e697af56654\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.754114 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-config\") pod \"d2bb9bb7-97a8-42c8-b690-5e697af56654\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.754228 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/d2bb9bb7-97a8-42c8-b690-5e697af56654-kube-api-access-76fj8\") pod \"d2bb9bb7-97a8-42c8-b690-5e697af56654\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.754275 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-swift-storage-0\") pod \"d2bb9bb7-97a8-42c8-b690-5e697af56654\" (UID: \"d2bb9bb7-97a8-42c8-b690-5e697af56654\") " Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.759901 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bb9bb7-97a8-42c8-b690-5e697af56654-kube-api-access-76fj8" (OuterVolumeSpecName: "kube-api-access-76fj8") pod "d2bb9bb7-97a8-42c8-b690-5e697af56654" (UID: "d2bb9bb7-97a8-42c8-b690-5e697af56654"). InnerVolumeSpecName "kube-api-access-76fj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.810342 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2bb9bb7-97a8-42c8-b690-5e697af56654" (UID: "d2bb9bb7-97a8-42c8-b690-5e697af56654"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.815057 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2bb9bb7-97a8-42c8-b690-5e697af56654" (UID: "d2bb9bb7-97a8-42c8-b690-5e697af56654"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.816382 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2bb9bb7-97a8-42c8-b690-5e697af56654" (UID: "d2bb9bb7-97a8-42c8-b690-5e697af56654"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.826045 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2bb9bb7-97a8-42c8-b690-5e697af56654" (UID: "d2bb9bb7-97a8-42c8-b690-5e697af56654"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.829960 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-config" (OuterVolumeSpecName: "config") pod "d2bb9bb7-97a8-42c8-b690-5e697af56654" (UID: "d2bb9bb7-97a8-42c8-b690-5e697af56654"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.859052 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76fj8\" (UniqueName: \"kubernetes.io/projected/d2bb9bb7-97a8-42c8-b690-5e697af56654-kube-api-access-76fj8\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.859080 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.859092 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.859100 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.859109 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:31 crc kubenswrapper[4853]: I1209 17:21:31.859118 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2bb9bb7-97a8-42c8-b690-5e697af56654-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:32 crc kubenswrapper[4853]: I1209 17:21:32.564309 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerStarted","Data":"6c2082c954bd277e246619a69c7434530c689049db84da4729babbb54d11c217"} Dec 09 17:21:32 crc kubenswrapper[4853]: I1209 17:21:32.564621 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerStarted","Data":"3ad193dcaf4fc5c9ec6fcafb63d9b5ca742616fccc02a70ec9a4f10e0bb50f31"} Dec 09 17:21:32 crc kubenswrapper[4853]: I1209 17:21:32.564541 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-99tqh" Dec 09 17:21:32 crc kubenswrapper[4853]: I1209 17:21:32.600492 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-99tqh"] Dec 09 17:21:32 crc kubenswrapper[4853]: I1209 17:21:32.613619 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-99tqh"] Dec 09 17:21:33 crc kubenswrapper[4853]: I1209 17:21:33.591159 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bb9bb7-97a8-42c8-b690-5e697af56654" path="/var/lib/kubelet/pods/d2bb9bb7-97a8-42c8-b690-5e697af56654/volumes" Dec 09 17:21:33 crc kubenswrapper[4853]: I1209 17:21:33.592675 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerStarted","Data":"06e9bf13fdea884d87caa573ba5c8ba2a4d453ec89fbbe6a5857c734756b74fe"} Dec 09 17:21:33 crc kubenswrapper[4853]: I1209 17:21:33.817678 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-65d74d4db4-2bcdr" Dec 09 17:21:34 crc kubenswrapper[4853]: I1209 17:21:34.603331 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerStarted","Data":"5325b291962fc11bf4652c6c4ca436be9ff102f58509209b94c270d492c934e2"} Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.664451 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerStarted","Data":"d425cf9ae778a605a887e24ae880859862148eb50d0e0f5c10587577a543f616"} Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.665439 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.701360 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.39087176 podStartE2EDuration="5.701342061s" podCreationTimestamp="2025-12-09 17:21:30 +0000 UTC" firstStartedPulling="2025-12-09 17:21:31.719093551 +0000 UTC m=+1518.653832733" lastFinishedPulling="2025-12-09 17:21:35.029563852 +0000 UTC m=+1521.964303034" observedRunningTime="2025-12-09 17:21:35.691051374 +0000 UTC m=+1522.625790546" watchObservedRunningTime="2025-12-09 17:21:35.701342061 +0000 UTC m=+1522.636081243" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.892400 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 09 17:21:35 crc kubenswrapper[4853]: E1209 17:21:35.892916 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerName="dnsmasq-dns" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.892933 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerName="dnsmasq-dns" Dec 09 17:21:35 crc kubenswrapper[4853]: E1209 17:21:35.892942 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerName="init" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.892948 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerName="init" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.893192 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bb9bb7-97a8-42c8-b690-5e697af56654" containerName="dnsmasq-dns" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.894136 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.898303 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gnvvh" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.898641 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.899261 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 09 17:21:35 crc kubenswrapper[4853]: I1209 17:21:35.904427 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.077827 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t48p9\" (UniqueName: \"kubernetes.io/projected/24e96bca-760e-4742-823e-5cb3dc9d752e-kube-api-access-t48p9\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.078677 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24e96bca-760e-4742-823e-5cb3dc9d752e-openstack-config-secret\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.079012 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e96bca-760e-4742-823e-5cb3dc9d752e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.079048 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24e96bca-760e-4742-823e-5cb3dc9d752e-openstack-config\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.181366 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e96bca-760e-4742-823e-5cb3dc9d752e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.181416 4853 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24e96bca-760e-4742-823e-5cb3dc9d752e-openstack-config\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.181520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t48p9\" (UniqueName: \"kubernetes.io/projected/24e96bca-760e-4742-823e-5cb3dc9d752e-kube-api-access-t48p9\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.181562 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24e96bca-760e-4742-823e-5cb3dc9d752e-openstack-config-secret\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.183667 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24e96bca-760e-4742-823e-5cb3dc9d752e-openstack-config\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.187118 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e96bca-760e-4742-823e-5cb3dc9d752e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.190136 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24e96bca-760e-4742-823e-5cb3dc9d752e-openstack-config-secret\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.208197 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t48p9\" (UniqueName: \"kubernetes.io/projected/24e96bca-760e-4742-823e-5cb3dc9d752e-kube-api-access-t48p9\") pod \"openstackclient\" (UID: \"24e96bca-760e-4742-823e-5cb3dc9d752e\") " pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.211984 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.242997 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.298429 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.644396 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.674842 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="cinder-scheduler" containerID="cri-o://fd599f71251b72ef4f524c4a1b3f03d2d92dc51e92dc64f012e31f4044e852ae" gracePeriod=30 Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.674886 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="probe" containerID="cri-o://6b66928a44e6cbaf9bb3deaad1ac18ba04be19f9a173e89cdc4d0c116cde25e0" gracePeriod=30 Dec 09 17:21:36 crc kubenswrapper[4853]: I1209 17:21:36.808252 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 17:21:37 crc kubenswrapper[4853]: I1209 17:21:37.651086 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5496596656-4sjvd" Dec 09 17:21:37 crc kubenswrapper[4853]: I1209 17:21:37.705428 4853 generic.go:334] "Generic (PLEG): container finished" podID="a548ed8e-f015-4c35-817a-a00733948bd6" containerID="6b66928a44e6cbaf9bb3deaad1ac18ba04be19f9a173e89cdc4d0c116cde25e0" exitCode=0 Dec 09 17:21:37 crc kubenswrapper[4853]: I1209 17:21:37.705501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a548ed8e-f015-4c35-817a-a00733948bd6","Type":"ContainerDied","Data":"6b66928a44e6cbaf9bb3deaad1ac18ba04be19f9a173e89cdc4d0c116cde25e0"} Dec 09 17:21:37 crc kubenswrapper[4853]: I1209 17:21:37.707571 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"24e96bca-760e-4742-823e-5cb3dc9d752e","Type":"ContainerStarted","Data":"2a84d86bb7bb1ea5f3091eaa8d7d306e0cdd088176a70fd11720df0f483ee6bf"} Dec 09 17:21:38 crc kubenswrapper[4853]: I1209 17:21:38.723966 4853 generic.go:334] "Generic (PLEG): container finished" podID="a548ed8e-f015-4c35-817a-a00733948bd6" containerID="fd599f71251b72ef4f524c4a1b3f03d2d92dc51e92dc64f012e31f4044e852ae" exitCode=0 Dec 09 17:21:38 crc kubenswrapper[4853]: I1209 17:21:38.724056 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a548ed8e-f015-4c35-817a-a00733948bd6","Type":"ContainerDied","Data":"fd599f71251b72ef4f524c4a1b3f03d2d92dc51e92dc64f012e31f4044e852ae"} Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.707284 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.748176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a548ed8e-f015-4c35-817a-a00733948bd6","Type":"ContainerDied","Data":"c24f74850e06739f75ca5d96481d9622551278f9004fb4a2d5f93244780638b4"} Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.748254 4853 scope.go:117] "RemoveContainer" containerID="6b66928a44e6cbaf9bb3deaad1ac18ba04be19f9a173e89cdc4d0c116cde25e0" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.748440 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.783512 4853 scope.go:117] "RemoveContainer" containerID="fd599f71251b72ef4f524c4a1b3f03d2d92dc51e92dc64f012e31f4044e852ae" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.815247 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-combined-ca-bundle\") pod \"a548ed8e-f015-4c35-817a-a00733948bd6\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.815337 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data-custom\") pod \"a548ed8e-f015-4c35-817a-a00733948bd6\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.815514 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data\") pod \"a548ed8e-f015-4c35-817a-a00733948bd6\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.815578 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54w5x\" (UniqueName: \"kubernetes.io/projected/a548ed8e-f015-4c35-817a-a00733948bd6-kube-api-access-54w5x\") pod \"a548ed8e-f015-4c35-817a-a00733948bd6\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.816339 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-scripts\") pod \"a548ed8e-f015-4c35-817a-a00733948bd6\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.816441 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a548ed8e-f015-4c35-817a-a00733948bd6-etc-machine-id\") pod \"a548ed8e-f015-4c35-817a-a00733948bd6\" (UID: \"a548ed8e-f015-4c35-817a-a00733948bd6\") " Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.816932 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a548ed8e-f015-4c35-817a-a00733948bd6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a548ed8e-f015-4c35-817a-a00733948bd6" (UID: "a548ed8e-f015-4c35-817a-a00733948bd6"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.832295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a548ed8e-f015-4c35-817a-a00733948bd6" (UID: "a548ed8e-f015-4c35-817a-a00733948bd6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.840745 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a548ed8e-f015-4c35-817a-a00733948bd6-kube-api-access-54w5x" (OuterVolumeSpecName: "kube-api-access-54w5x") pod "a548ed8e-f015-4c35-817a-a00733948bd6" (UID: "a548ed8e-f015-4c35-817a-a00733948bd6"). InnerVolumeSpecName "kube-api-access-54w5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.854020 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-scripts" (OuterVolumeSpecName: "scripts") pod "a548ed8e-f015-4c35-817a-a00733948bd6" (UID: "a548ed8e-f015-4c35-817a-a00733948bd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.904708 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a548ed8e-f015-4c35-817a-a00733948bd6" (UID: "a548ed8e-f015-4c35-817a-a00733948bd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.919588 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54w5x\" (UniqueName: \"kubernetes.io/projected/a548ed8e-f015-4c35-817a-a00733948bd6-kube-api-access-54w5x\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.919641 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.919654 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a548ed8e-f015-4c35-817a-a00733948bd6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.919666 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:39 crc kubenswrapper[4853]: I1209 17:21:39.919677 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.070768 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data" (OuterVolumeSpecName: "config-data") pod "a548ed8e-f015-4c35-817a-a00733948bd6" (UID: "a548ed8e-f015-4c35-817a-a00733948bd6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.125192 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a548ed8e-f015-4c35-817a-a00733948bd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.394647 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.409675 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.423356 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:40 crc kubenswrapper[4853]: E1209 17:21:40.428503 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="cinder-scheduler" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.428539 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="cinder-scheduler" Dec 09 17:21:40 crc kubenswrapper[4853]: E1209 17:21:40.428564 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="probe" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.428571 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="probe" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.428823 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="cinder-scheduler" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.428849 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" containerName="probe" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.430052 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.441799 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.445560 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.533818 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-scripts\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.533940 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.533989 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.534011 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-config-data\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.534121 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55f413b4-3b77-4e15-97f8-1cedee56a118-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.534154 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxnss\" (UniqueName: \"kubernetes.io/projected/55f413b4-3b77-4e15-97f8-1cedee56a118-kube-api-access-rxnss\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.636224 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55f413b4-3b77-4e15-97f8-1cedee56a118-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.636298 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxnss\" (UniqueName: \"kubernetes.io/projected/55f413b4-3b77-4e15-97f8-1cedee56a118-kube-api-access-rxnss\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.636368 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55f413b4-3b77-4e15-97f8-1cedee56a118-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.636398 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-scripts\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.636663 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.636776 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.636807 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-config-data\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.650785 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.654282 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-config-data\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.664326 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-scripts\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.666124 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxnss\" (UniqueName: \"kubernetes.io/projected/55f413b4-3b77-4e15-97f8-1cedee56a118-kube-api-access-rxnss\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.674455 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55f413b4-3b77-4e15-97f8-1cedee56a118-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"55f413b4-3b77-4e15-97f8-1cedee56a118\") " 
pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.767834 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.927251 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-756c8b85d7-nmj2j"] Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.929302 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.934074 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.934281 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.934429 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.943346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-config-data\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.943393 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b3787e-b650-484a-8fa1-5371b8e96c0e-run-httpd\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.943455 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-public-tls-certs\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.943518 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-internal-tls-certs\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.943564 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/05b3787e-b650-484a-8fa1-5371b8e96c0e-etc-swift\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.943586 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55b7\" (UniqueName: \"kubernetes.io/projected/05b3787e-b650-484a-8fa1-5371b8e96c0e-kube-api-access-q55b7\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 
17:21:40.943809 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b3787e-b650-484a-8fa1-5371b8e96c0e-log-httpd\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.943831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-combined-ca-bundle\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:40 crc kubenswrapper[4853]: I1209 17:21:40.980032 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-756c8b85d7-nmj2j"] Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.048468 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/05b3787e-b650-484a-8fa1-5371b8e96c0e-etc-swift\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.048546 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q55b7\" (UniqueName: \"kubernetes.io/projected/05b3787e-b650-484a-8fa1-5371b8e96c0e-kube-api-access-q55b7\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.048765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b3787e-b650-484a-8fa1-5371b8e96c0e-log-httpd\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.048797 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-combined-ca-bundle\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.048841 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-config-data\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.048864 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b3787e-b650-484a-8fa1-5371b8e96c0e-run-httpd\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.048944 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-public-tls-certs\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: 
\"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.049028 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-internal-tls-certs\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.049675 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b3787e-b650-484a-8fa1-5371b8e96c0e-run-httpd\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.050927 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05b3787e-b650-484a-8fa1-5371b8e96c0e-log-httpd\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.058408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-combined-ca-bundle\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.058538 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-internal-tls-certs\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.058950 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-public-tls-certs\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.059774 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b3787e-b650-484a-8fa1-5371b8e96c0e-config-data\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.060253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/05b3787e-b650-484a-8fa1-5371b8e96c0e-etc-swift\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.082551 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55b7\" (UniqueName: \"kubernetes.io/projected/05b3787e-b650-484a-8fa1-5371b8e96c0e-kube-api-access-q55b7\") pod \"swift-proxy-756c8b85d7-nmj2j\" (UID: \"05b3787e-b650-484a-8fa1-5371b8e96c0e\") " pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:41 crc 
Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.255573 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-756c8b85d7-nmj2j"
Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.390019 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 09 17:21:41 crc kubenswrapper[4853]: W1209 17:21:41.415793 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f413b4_3b77_4e15_97f8_1cedee56a118.slice/crio-3de546e0c569a811f12b19ac6bf83ce8cfdcec880c521b37ebb4f963c8540eee WatchSource:0}: Error finding container 3de546e0c569a811f12b19ac6bf83ce8cfdcec880c521b37ebb4f963c8540eee: Status 404 returned error can't find the container with id 3de546e0c569a811f12b19ac6bf83ce8cfdcec880c521b37ebb4f963c8540eee
Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.614277 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a548ed8e-f015-4c35-817a-a00733948bd6" path="/var/lib/kubelet/pods/a548ed8e-f015-4c35-817a-a00733948bd6/volumes"
Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.651667 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.794784 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"55f413b4-3b77-4e15-97f8-1cedee56a118","Type":"ContainerStarted","Data":"3de546e0c569a811f12b19ac6bf83ce8cfdcec880c521b37ebb4f963c8540eee"}
Dec 09 17:21:41 crc kubenswrapper[4853]: I1209 17:21:41.887709 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-756c8b85d7-nmj2j"]
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.031897 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.032130 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-log" containerID="cri-o://2e3919b681bf24a210f482fd4673a0d4a5676c928d9e0a4b044236f9d7e38902" gracePeriod=30
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.033115 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-httpd" containerID="cri-o://eb142153196e9c94ef8c428d32e9f7e12e3306e4cb66edb674ccc432893c8c3d" gracePeriod=30
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.146344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.146895 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-central-agent" containerID="cri-o://6c2082c954bd277e246619a69c7434530c689049db84da4729babbb54d11c217" gracePeriod=30
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.147349 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="proxy-httpd" containerID="cri-o://d425cf9ae778a605a887e24ae880859862148eb50d0e0f5c10587577a543f616" gracePeriod=30
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.147392 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="sg-core" containerID="cri-o://5325b291962fc11bf4652c6c4ca436be9ff102f58509209b94c270d492c934e2" gracePeriod=30
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.147423 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-notification-agent" containerID="cri-o://06e9bf13fdea884d87caa573ba5c8ba2a4d453ec89fbbe6a5857c734756b74fe" gracePeriod=30
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.827248 4853 generic.go:334] "Generic (PLEG): container finished" podID="9add26f6-37de-4ffc-9426-fab101257314" containerID="2e3919b681bf24a210f482fd4673a0d4a5676c928d9e0a4b044236f9d7e38902" exitCode=143
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.827559 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9add26f6-37de-4ffc-9426-fab101257314","Type":"ContainerDied","Data":"2e3919b681bf24a210f482fd4673a0d4a5676c928d9e0a4b044236f9d7e38902"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.830669 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-756c8b85d7-nmj2j" event={"ID":"05b3787e-b650-484a-8fa1-5371b8e96c0e","Type":"ContainerStarted","Data":"c87bc866e7084022cc2960f21e5ecee885a17b5dd205051a38af67f880802f7c"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.830718 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-756c8b85d7-nmj2j" event={"ID":"05b3787e-b650-484a-8fa1-5371b8e96c0e","Type":"ContainerStarted","Data":"27f637cc45482dba80d93ccfe01bc7dfa3574d5756940c7eaa9b51e48f8ebd62"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.830728 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-756c8b85d7-nmj2j" event={"ID":"05b3787e-b650-484a-8fa1-5371b8e96c0e","Type":"ContainerStarted","Data":"79bc2525a476fb2cd87637c315e33ed922fdd077ad238fa5339d134e8621ed30"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.831698 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-756c8b85d7-nmj2j"
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.874574 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"55f413b4-3b77-4e15-97f8-1cedee56a118","Type":"ContainerStarted","Data":"7e30f3df0e3fb133c2567b314ad6d70bfffc004f8051e35f5754b5995f7df810"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.893738 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-756c8b85d7-nmj2j" podStartSLOduration=2.8936999549999998 podStartE2EDuration="2.893699955s" podCreationTimestamp="2025-12-09 17:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:42.863562781 +0000 UTC m=+1529.798301983" watchObservedRunningTime="2025-12-09 17:21:42.893699955 +0000 UTC m=+1529.828439147"
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929229 4853 generic.go:334] "Generic (PLEG): container finished" podID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerID="d425cf9ae778a605a887e24ae880859862148eb50d0e0f5c10587577a543f616" exitCode=0
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929269 4853 generic.go:334] "Generic (PLEG): container finished" podID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerID="5325b291962fc11bf4652c6c4ca436be9ff102f58509209b94c270d492c934e2" exitCode=2
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929277 4853 generic.go:334] "Generic (PLEG): container finished" podID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerID="06e9bf13fdea884d87caa573ba5c8ba2a4d453ec89fbbe6a5857c734756b74fe" exitCode=0
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929288 4853 generic.go:334] "Generic (PLEG): container finished" podID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerID="6c2082c954bd277e246619a69c7434530c689049db84da4729babbb54d11c217" exitCode=0
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerDied","Data":"d425cf9ae778a605a887e24ae880859862148eb50d0e0f5c10587577a543f616"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerDied","Data":"5325b291962fc11bf4652c6c4ca436be9ff102f58509209b94c270d492c934e2"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerDied","Data":"06e9bf13fdea884d87caa573ba5c8ba2a4d453ec89fbbe6a5857c734756b74fe"}
Dec 09 17:21:42 crc kubenswrapper[4853]: I1209 17:21:42.929357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerDied","Data":"6c2082c954bd277e246619a69c7434530c689049db84da4729babbb54d11c217"}
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.187567 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.188239 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-log" containerID="cri-o://e3c040a04b35d4f21410f73ef47c654dc2bc735c0618450d4640188fdcbaf697" gracePeriod=30
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.188474 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-httpd" containerID="cri-o://e6b74223f78c0e0010bc7c951980521974c85cbbe6a804780cd558381f49be16" gracePeriod=30
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.449621 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.584714 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-sg-core-conf-yaml\") pod \"7ae34d76-6148-4f1c-9bc4-5bc514426146\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") "
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.584828 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrc5s\" (UniqueName: \"kubernetes.io/projected/7ae34d76-6148-4f1c-9bc4-5bc514426146-kube-api-access-nrc5s\") pod \"7ae34d76-6148-4f1c-9bc4-5bc514426146\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") "
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.584894 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-config-data\") pod \"7ae34d76-6148-4f1c-9bc4-5bc514426146\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") "
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.584961 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-scripts\") pod \"7ae34d76-6148-4f1c-9bc4-5bc514426146\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") "
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.585046 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-log-httpd\") pod \"7ae34d76-6148-4f1c-9bc4-5bc514426146\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") "
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.585158 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-run-httpd\") pod \"7ae34d76-6148-4f1c-9bc4-5bc514426146\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") "
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.585204 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-combined-ca-bundle\") pod \"7ae34d76-6148-4f1c-9bc4-5bc514426146\" (UID: \"7ae34d76-6148-4f1c-9bc4-5bc514426146\") "
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.593619 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae34d76-6148-4f1c-9bc4-5bc514426146-kube-api-access-nrc5s" (OuterVolumeSpecName: "kube-api-access-nrc5s") pod "7ae34d76-6148-4f1c-9bc4-5bc514426146" (UID: "7ae34d76-6148-4f1c-9bc4-5bc514426146"). InnerVolumeSpecName "kube-api-access-nrc5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.593993 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ae34d76-6148-4f1c-9bc4-5bc514426146" (UID: "7ae34d76-6148-4f1c-9bc4-5bc514426146"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.595742 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ae34d76-6148-4f1c-9bc4-5bc514426146" (UID: "7ae34d76-6148-4f1c-9bc4-5bc514426146"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:21:43 crc kubenswrapper[4853]: E1209 17:21:43.599870 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26ebb774_866f_478e_8ef2_7cfbd141887b.slice/crio-conmon-e3c040a04b35d4f21410f73ef47c654dc2bc735c0618450d4640188fdcbaf697.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26ebb774_866f_478e_8ef2_7cfbd141887b.slice/crio-e3c040a04b35d4f21410f73ef47c654dc2bc735c0618450d4640188fdcbaf697.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.630861 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-scripts" (OuterVolumeSpecName: "scripts") pod "7ae34d76-6148-4f1c-9bc4-5bc514426146" (UID: "7ae34d76-6148-4f1c-9bc4-5bc514426146"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.635321 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ae34d76-6148-4f1c-9bc4-5bc514426146" (UID: "7ae34d76-6148-4f1c-9bc4-5bc514426146"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.694224 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.694256 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ae34d76-6148-4f1c-9bc4-5bc514426146-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.694266 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.694277 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrc5s\" (UniqueName: \"kubernetes.io/projected/7ae34d76-6148-4f1c-9bc4-5bc514426146-kube-api-access-nrc5s\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.694286 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.730891 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ae34d76-6148-4f1c-9bc4-5bc514426146" (UID: "7ae34d76-6148-4f1c-9bc4-5bc514426146"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.796333 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.902349 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae34d76-6148-4f1c-9bc4-5bc514426146-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.941019 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"55f413b4-3b77-4e15-97f8-1cedee56a118","Type":"ContainerStarted","Data":"c448ccab8d35f6605c8c91800c384fefbad7358e6ef8eeed8ef1a3f1681ebba1"} Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.954368 4853 generic.go:334] "Generic (PLEG): container finished" podID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerID="e3c040a04b35d4f21410f73ef47c654dc2bc735c0618450d4640188fdcbaf697" exitCode=143 Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.954430 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebb774-866f-478e-8ef2-7cfbd141887b","Type":"ContainerDied","Data":"e3c040a04b35d4f21410f73ef47c654dc2bc735c0618450d4640188fdcbaf697"} Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.963356 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.963362 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ae34d76-6148-4f1c-9bc4-5bc514426146","Type":"ContainerDied","Data":"3ad193dcaf4fc5c9ec6fcafb63d9b5ca742616fccc02a70ec9a4f10e0bb50f31"} Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.963421 4853 scope.go:117] "RemoveContainer" containerID="d425cf9ae778a605a887e24ae880859862148eb50d0e0f5c10587577a543f616" Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.963573 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.968546 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.968524272 podStartE2EDuration="3.968524272s" podCreationTimestamp="2025-12-09 17:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:43.959685285 +0000 UTC m=+1530.894424477" watchObservedRunningTime="2025-12-09 17:21:43.968524272 +0000 UTC m=+1530.903263454" Dec 09 17:21:43 crc kubenswrapper[4853]: I1209 17:21:43.999353 4853 scope.go:117] "RemoveContainer" containerID="5325b291962fc11bf4652c6c4ca436be9ff102f58509209b94c270d492c934e2" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.009047 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.021001 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.043143 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:21:44 crc kubenswrapper[4853]: E1209 17:21:44.044028 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-central-agent" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044056 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-central-agent" Dec 09 17:21:44 crc kubenswrapper[4853]: E1209 17:21:44.044093 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="proxy-httpd" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044106 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="proxy-httpd" Dec 09 17:21:44 crc kubenswrapper[4853]: E1209 17:21:44.044126 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-notification-agent" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044137 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-notification-agent" Dec 09 17:21:44 crc kubenswrapper[4853]: E1209 17:21:44.044172 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="sg-core" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044179 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="sg-core" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044457 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-notification-agent" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044492 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="ceilometer-central-agent" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044504 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="proxy-httpd" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.044524 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" containerName="sg-core" Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.047372 4853 util.go:30] "No sandbox for pod can be found. 
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.047372 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.050437 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.050684 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.065857 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.097525 4853 scope.go:117] "RemoveContainer" containerID="06e9bf13fdea884d87caa573ba5c8ba2a4d453ec89fbbe6a5857c734756b74fe"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.109904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.110021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8nks\" (UniqueName: \"kubernetes.io/projected/36a546d1-768b-4900-abe2-a2d9876b2557-kube-api-access-v8nks\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.110188 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-run-httpd\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.110223 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-config-data\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.110246 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.110287 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-log-httpd\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.110312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-scripts\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.127619 4853 scope.go:117] "RemoveContainer" containerID="6c2082c954bd277e246619a69c7434530c689049db84da4729babbb54d11c217"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.212317 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-run-httpd\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.212396 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-config-data\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.212425 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.212488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-log-httpd\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.212518 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-scripts\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.212563 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.212670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8nks\" (UniqueName: \"kubernetes.io/projected/36a546d1-768b-4900-abe2-a2d9876b2557-kube-api-access-v8nks\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.213422 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-run-httpd\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.214580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-log-httpd\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.221370 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.221431 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.223197 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-scripts\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.224475 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-config-data\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.233791 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8nks\" (UniqueName: \"kubernetes.io/projected/36a546d1-768b-4900-abe2-a2d9876b2557-kube-api-access-v8nks\") pod \"ceilometer-0\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " pod="openstack/ceilometer-0"
Dec 09 17:21:44 crc kubenswrapper[4853]: I1209 17:21:44.383217 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 17:21:45 crc kubenswrapper[4853]: I1209 17:21:45.300814 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.183:9292/healthcheck\": read tcp 10.217.0.2:45702->10.217.0.183:9292: read: connection reset by peer"
Dec 09 17:21:45 crc kubenswrapper[4853]: I1209 17:21:45.300811 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.183:9292/healthcheck\": read tcp 10.217.0.2:45696->10.217.0.183:9292: read: connection reset by peer"
Dec 09 17:21:45 crc kubenswrapper[4853]: I1209 17:21:45.334490 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:21:45 crc kubenswrapper[4853]: I1209 17:21:45.383491 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:21:45 crc kubenswrapper[4853]: I1209 17:21:45.589549 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae34d76-6148-4f1c-9bc4-5bc514426146" path="/var/lib/kubelet/pods/7ae34d76-6148-4f1c-9bc4-5bc514426146/volumes"
Dec 09 17:21:45 crc kubenswrapper[4853]: I1209 17:21:45.768871 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.138053 4853 generic.go:334] "Generic (PLEG): container finished" podID="9add26f6-37de-4ffc-9426-fab101257314" containerID="eb142153196e9c94ef8c428d32e9f7e12e3306e4cb66edb674ccc432893c8c3d" exitCode=0
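The readiness failures above are ordinary HTTP GETs against glance's /healthcheck endpoint that hit a connection reset because the containers are mid-shutdown, so the failures are expected noise during the delete. A minimal readiness-style probe for comparison; the URL and timeout are illustrative, and certificate verification is skipped only because this sketch has no CA bundle to pin:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one readiness-style check: any transport error or
// HTTP status >= 400 counts as a failure.
func probe(url string) error {
	client := &http.Client{
		Timeout:   1 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "read: connection reset by peer"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probe("https://10.217.0.183:9292/healthcheck"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}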
event={"ID":"9add26f6-37de-4ffc-9426-fab101257314","Type":"ContainerDied","Data":"eb142153196e9c94ef8c428d32e9f7e12e3306e4cb66edb674ccc432893c8c3d"} Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.147687 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerStarted","Data":"c14433b20c46b2bedd92a9120481896a5e246482118c020b16d7a5f20505be88"} Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.311549 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.381813 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.382227 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-config-data\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.382288 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-combined-ca-bundle\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.382381 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-scripts\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.382577 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-public-tls-certs\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.382675 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hbwr\" (UniqueName: \"kubernetes.io/projected/9add26f6-37de-4ffc-9426-fab101257314-kube-api-access-2hbwr\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.382730 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-httpd-run\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.382754 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-logs\") pod \"9add26f6-37de-4ffc-9426-fab101257314\" (UID: \"9add26f6-37de-4ffc-9426-fab101257314\") " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.384442 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-logs" (OuterVolumeSpecName: "logs") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.388723 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.395451 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.395640 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-scripts" (OuterVolumeSpecName: "scripts") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.402221 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9add26f6-37de-4ffc-9426-fab101257314-kube-api-access-2hbwr" (OuterVolumeSpecName: "kube-api-access-2hbwr") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "kube-api-access-2hbwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.453921 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.486309 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hbwr\" (UniqueName: \"kubernetes.io/projected/9add26f6-37de-4ffc-9426-fab101257314-kube-api-access-2hbwr\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.486353 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.486363 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9add26f6-37de-4ffc-9426-fab101257314-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.486395 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.486404 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.486413 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.496822 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-config-data" (OuterVolumeSpecName: "config-data") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.528201 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.543896 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9add26f6-37de-4ffc-9426-fab101257314" (UID: "9add26f6-37de-4ffc-9426-fab101257314"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.590049 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.590121 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.590133 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9add26f6-37de-4ffc-9426-fab101257314-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.887985 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-v4hn8"] Dec 09 17:21:46 crc kubenswrapper[4853]: E1209 17:21:46.890105 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-log" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.890163 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-log" Dec 09 17:21:46 crc kubenswrapper[4853]: E1209 17:21:46.890190 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-httpd" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.890200 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-httpd" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.890719 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-httpd" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.890748 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9add26f6-37de-4ffc-9426-fab101257314" containerName="glance-log" Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.893892 4853 util.go:30] "No sandbox for pod can be found. 
Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.893892 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.906812 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v4hn8"]
Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.999692 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg4z\" (UniqueName: \"kubernetes.io/projected/24b35a27-0dc5-4c08-9505-f81db9987470-kube-api-access-xbg4z\") pod \"nova-api-db-create-v4hn8\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:46 crc kubenswrapper[4853]: I1209 17:21:46.999747 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b35a27-0dc5-4c08-9505-f81db9987470-operator-scripts\") pod \"nova-api-db-create-v4hn8\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.001374 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9g9jw"]
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.002993 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.029364 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9g9jw"]
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.101894 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbg4z\" (UniqueName: \"kubernetes.io/projected/24b35a27-0dc5-4c08-9505-f81db9987470-kube-api-access-xbg4z\") pod \"nova-api-db-create-v4hn8\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.101937 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpvj\" (UniqueName: \"kubernetes.io/projected/f263ce7c-227e-4025-af32-6bf4176920f7-kube-api-access-zzpvj\") pod \"nova-cell0-db-create-9g9jw\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.101962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b35a27-0dc5-4c08-9505-f81db9987470-operator-scripts\") pod \"nova-api-db-create-v4hn8\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.101992 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f263ce7c-227e-4025-af32-6bf4176920f7-operator-scripts\") pod \"nova-cell0-db-create-9g9jw\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.103084 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b35a27-0dc5-4c08-9505-f81db9987470-operator-scripts\") pod \"nova-api-db-create-v4hn8\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.109414 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m5x9t"]
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.111412 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5x9t"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.121732 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-eb70-account-create-update-tmkjg"]
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.122442 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbg4z\" (UniqueName: \"kubernetes.io/projected/24b35a27-0dc5-4c08-9505-f81db9987470-kube-api-access-xbg4z\") pod \"nova-api-db-create-v4hn8\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.123295 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eb70-account-create-update-tmkjg"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.125185 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.143757 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m5x9t"]
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.160371 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eb70-account-create-update-tmkjg"]
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.189209 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.189740 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9add26f6-37de-4ffc-9426-fab101257314","Type":"ContainerDied","Data":"d16dfa4a11d62844cfe141697d707749a496ed2ae9ee33b5c71f2e92bf56a3dd"}
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.189822 4853 scope.go:117] "RemoveContainer" containerID="eb142153196e9c94ef8c428d32e9f7e12e3306e4cb66edb674ccc432893c8c3d"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.198766 4853 generic.go:334] "Generic (PLEG): container finished" podID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerID="e6b74223f78c0e0010bc7c951980521974c85cbbe6a804780cd558381f49be16" exitCode=0
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.198858 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebb774-866f-478e-8ef2-7cfbd141887b","Type":"ContainerDied","Data":"e6b74223f78c0e0010bc7c951980521974c85cbbe6a804780cd558381f49be16"}
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.198893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"26ebb774-866f-478e-8ef2-7cfbd141887b","Type":"ContainerDied","Data":"b77fcd64f4f7b5d3c333020469582f3a1f2d30ff0af7b4af4921823f7ec50006"}
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.198914 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77fcd64f4f7b5d3c333020469582f3a1f2d30ff0af7b4af4921823f7ec50006"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.201293 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerStarted","Data":"e1d2382f7519aac7bd68e69844f197665d2db80daa7ae9a14e6504b5c854a07d"}
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.205183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf974\" (UniqueName: \"kubernetes.io/projected/02a2dc7d-7b15-48fb-96eb-f11ff9023399-kube-api-access-rf974\") pod \"nova-cell1-db-create-m5x9t\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " pod="openstack/nova-cell1-db-create-m5x9t"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.205228 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a2dc7d-7b15-48fb-96eb-f11ff9023399-operator-scripts\") pod \"nova-cell1-db-create-m5x9t\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " pod="openstack/nova-cell1-db-create-m5x9t"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.205257 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hsm\" (UniqueName: \"kubernetes.io/projected/dc14b612-092a-43a4-affb-95ff52a3e82d-kube-api-access-k4hsm\") pod \"nova-api-eb70-account-create-update-tmkjg\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " pod="openstack/nova-api-eb70-account-create-update-tmkjg"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.205313 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc14b612-092a-43a4-affb-95ff52a3e82d-operator-scripts\") pod \"nova-api-eb70-account-create-update-tmkjg\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " pod="openstack/nova-api-eb70-account-create-update-tmkjg"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.205370 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpvj\" (UniqueName: \"kubernetes.io/projected/f263ce7c-227e-4025-af32-6bf4176920f7-kube-api-access-zzpvj\") pod \"nova-cell0-db-create-9g9jw\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.205425 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f263ce7c-227e-4025-af32-6bf4176920f7-operator-scripts\") pod \"nova-cell0-db-create-9g9jw\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.206297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f263ce7c-227e-4025-af32-6bf4176920f7-operator-scripts\") pod \"nova-cell0-db-create-9g9jw\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.206536 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.209100 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4hn8"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.222140 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpvj\" (UniqueName: \"kubernetes.io/projected/f263ce7c-227e-4025-af32-6bf4176920f7-kube-api-access-zzpvj\") pod \"nova-cell0-db-create-9g9jw\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.310707 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.310911 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t4wh\" (UniqueName: \"kubernetes.io/projected/26ebb774-866f-478e-8ef2-7cfbd141887b-kube-api-access-6t4wh\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.311100 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-combined-ca-bundle\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.311202 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-scripts\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.311254 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-internal-tls-certs\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.311291 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-config-data\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.311415 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-httpd-run\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.311505 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-logs\") pod \"26ebb774-866f-478e-8ef2-7cfbd141887b\" (UID: \"26ebb774-866f-478e-8ef2-7cfbd141887b\") "
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.312079 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf974\" (UniqueName: \"kubernetes.io/projected/02a2dc7d-7b15-48fb-96eb-f11ff9023399-kube-api-access-rf974\") pod \"nova-cell1-db-create-m5x9t\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " pod="openstack/nova-cell1-db-create-m5x9t"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.312120 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a2dc7d-7b15-48fb-96eb-f11ff9023399-operator-scripts\") pod \"nova-cell1-db-create-m5x9t\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " pod="openstack/nova-cell1-db-create-m5x9t"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.312148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4hsm\" (UniqueName: \"kubernetes.io/projected/dc14b612-092a-43a4-affb-95ff52a3e82d-kube-api-access-k4hsm\") pod \"nova-api-eb70-account-create-update-tmkjg\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " pod="openstack/nova-api-eb70-account-create-update-tmkjg"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.312198 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc14b612-092a-43a4-affb-95ff52a3e82d-operator-scripts\") pod \"nova-api-eb70-account-create-update-tmkjg\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " pod="openstack/nova-api-eb70-account-create-update-tmkjg"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.313717 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.314690 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-logs" (OuterVolumeSpecName: "logs") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.316737 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a2dc7d-7b15-48fb-96eb-f11ff9023399-operator-scripts\") pod \"nova-cell1-db-create-m5x9t\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " pod="openstack/nova-cell1-db-create-m5x9t"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.317701 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc14b612-092a-43a4-affb-95ff52a3e82d-operator-scripts\") pod \"nova-api-eb70-account-create-update-tmkjg\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " pod="openstack/nova-api-eb70-account-create-update-tmkjg"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.335812 4853 scope.go:117] "RemoveContainer" containerID="2e3919b681bf24a210f482fd4673a0d4a5676c928d9e0a4b044236f9d7e38902"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.337054 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-scripts" (OuterVolumeSpecName: "scripts") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.337248 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.337993 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9831-account-create-update-t5gr7"]
Dec 09 17:21:47 crc kubenswrapper[4853]: E1209 17:21:47.340167 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-log"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.368026 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-log"
Dec 09 17:21:47 crc kubenswrapper[4853]: E1209 17:21:47.368519 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-httpd"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.368529 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-httpd"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.369357 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-log"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.369419 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" containerName="glance-httpd"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.370629 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9831-account-create-update-t5gr7"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.373286 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.384090 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9g9jw"
Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.409147 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ebb774-866f-478e-8ef2-7cfbd141887b-kube-api-access-6t4wh" (OuterVolumeSpecName: "kube-api-access-6t4wh") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "kube-api-access-6t4wh".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.409732 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf974\" (UniqueName: \"kubernetes.io/projected/02a2dc7d-7b15-48fb-96eb-f11ff9023399-kube-api-access-rf974\") pod \"nova-cell1-db-create-m5x9t\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " pod="openstack/nova-cell1-db-create-m5x9t" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.420754 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4hsm\" (UniqueName: \"kubernetes.io/projected/dc14b612-092a-43a4-affb-95ff52a3e82d-kube-api-access-k4hsm\") pod \"nova-api-eb70-account-create-update-tmkjg\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " pod="openstack/nova-api-eb70-account-create-update-tmkjg" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.422827 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.422882 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.423171 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t4wh\" (UniqueName: \"kubernetes.io/projected/26ebb774-866f-478e-8ef2-7cfbd141887b-kube-api-access-6t4wh\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.423591 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9831-account-create-update-t5gr7"] Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.424820 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.424845 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/26ebb774-866f-478e-8ef2-7cfbd141887b-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.435101 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5x9t" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.441275 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eb70-account-create-update-tmkjg" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.470389 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.486198 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.508719 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.510689 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.513070 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.513308 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.527766 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkplr\" (UniqueName: \"kubernetes.io/projected/2a032d4f-5922-4085-8751-f413c1087e58-kube-api-access-dkplr\") pod \"nova-cell0-9831-account-create-update-t5gr7\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.527861 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a032d4f-5922-4085-8751-f413c1087e58-operator-scripts\") pod \"nova-cell0-9831-account-create-update-t5gr7\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.561984 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.571773 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.618228 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634118 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkplr\" (UniqueName: \"kubernetes.io/projected/2a032d4f-5922-4085-8751-f413c1087e58-kube-api-access-dkplr\") pod \"nova-cell0-9831-account-create-update-t5gr7\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634237 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634302 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a032d4f-5922-4085-8751-f413c1087e58-operator-scripts\") pod \"nova-cell0-9831-account-create-update-t5gr7\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634345 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-scripts\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634448 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-config-data\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634467 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f508ab5e-133f-469f-9791-3444c10fc527-logs\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634531 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634568 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tglr\" (UniqueName: 
\"kubernetes.io/projected/f508ab5e-133f-469f-9791-3444c10fc527-kube-api-access-7tglr\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.634586 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f508ab5e-133f-469f-9791-3444c10fc527-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.635430 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.635463 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.635476 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.635783 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a032d4f-5922-4085-8751-f413c1087e58-operator-scripts\") pod \"nova-cell0-9831-account-create-update-t5gr7\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.641941 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9add26f6-37de-4ffc-9426-fab101257314" path="/var/lib/kubelet/pods/9add26f6-37de-4ffc-9426-fab101257314/volumes" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.642631 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-config-data" (OuterVolumeSpecName: "config-data") pod "26ebb774-866f-478e-8ef2-7cfbd141887b" (UID: "26ebb774-866f-478e-8ef2-7cfbd141887b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.643620 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.643658 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2a5a-account-create-update-dxj42"] Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.659856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkplr\" (UniqueName: \"kubernetes.io/projected/2a032d4f-5922-4085-8751-f413c1087e58-kube-api-access-dkplr\") pod \"nova-cell0-9831-account-create-update-t5gr7\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.671927 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2a5a-account-create-update-dxj42"] Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.672036 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.676945 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.740262 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.742521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv7l2\" (UniqueName: \"kubernetes.io/projected/330df8bb-e48c-4b61-9cbd-69de9a1a1453-kube-api-access-xv7l2\") pod \"nova-cell1-2a5a-account-create-update-dxj42\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.742612 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-scripts\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.742708 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.745218 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-config-data\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.745243 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f508ab5e-133f-469f-9791-3444c10fc527-logs\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.745376 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330df8bb-e48c-4b61-9cbd-69de9a1a1453-operator-scripts\") pod \"nova-cell1-2a5a-account-create-update-dxj42\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.745404 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.746122 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tglr\" (UniqueName: \"kubernetes.io/projected/f508ab5e-133f-469f-9791-3444c10fc527-kube-api-access-7tglr\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.746161 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f508ab5e-133f-469f-9791-3444c10fc527-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.746398 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.749878 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ebb774-866f-478e-8ef2-7cfbd141887b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.750209 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f508ab5e-133f-469f-9791-3444c10fc527-logs\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.751916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-scripts\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.753711 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-config-data\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " 
pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.753926 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f508ab5e-133f-469f-9791-3444c10fc527-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.764428 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.770953 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.777910 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f508ab5e-133f-469f-9791-3444c10fc527-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.786163 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tglr\" (UniqueName: \"kubernetes.io/projected/f508ab5e-133f-469f-9791-3444c10fc527-kube-api-access-7tglr\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.827264 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"f508ab5e-133f-469f-9791-3444c10fc527\") " pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.851162 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330df8bb-e48c-4b61-9cbd-69de9a1a1453-operator-scripts\") pod \"nova-cell1-2a5a-account-create-update-dxj42\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.851294 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv7l2\" (UniqueName: \"kubernetes.io/projected/330df8bb-e48c-4b61-9cbd-69de9a1a1453-kube-api-access-xv7l2\") pod \"nova-cell1-2a5a-account-create-update-dxj42\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.852085 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330df8bb-e48c-4b61-9cbd-69de9a1a1453-operator-scripts\") pod \"nova-cell1-2a5a-account-create-update-dxj42\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.873538 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 17:21:47 crc kubenswrapper[4853]: I1209 17:21:47.882929 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv7l2\" (UniqueName: \"kubernetes.io/projected/330df8bb-e48c-4b61-9cbd-69de9a1a1453-kube-api-access-xv7l2\") pod \"nova-cell1-2a5a-account-create-update-dxj42\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.033480 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.075827 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v4hn8"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.299171 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9g9jw"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.327070 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerStarted","Data":"6eb4f8c50a9d029699fac9d1cae6ed56efc8f368f306cc7a395badb3dc89e9ab"} Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.337777 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.338872 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v4hn8" event={"ID":"24b35a27-0dc5-4c08-9505-f81db9987470","Type":"ContainerStarted","Data":"c435090da6df95348a1c03e82a9b13dc5f6df183cb727b1666e46dbdad846a3d"} Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.420750 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.432444 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.443006 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.446653 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.450235 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.451473 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.463129 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.588516 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.588858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.588952 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.589062 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.589100 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9725a66-09e4-4b83-9919-34cb10d5ed3f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.589127 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9725a66-09e4-4b83-9919-34cb10d5ed3f-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.589164 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.589220 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-49tfp\" (UniqueName: \"kubernetes.io/projected/b9725a66-09e4-4b83-9919-34cb10d5ed3f-kube-api-access-49tfp\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691075 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691125 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691239 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691435 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9725a66-09e4-4b83-9919-34cb10d5ed3f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691466 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9725a66-09e4-4b83-9919-34cb10d5ed3f-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691501 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.691575 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49tfp\" (UniqueName: \"kubernetes.io/projected/b9725a66-09e4-4b83-9919-34cb10d5ed3f-kube-api-access-49tfp\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.693094 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b9725a66-09e4-4b83-9919-34cb10d5ed3f-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.693649 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9725a66-09e4-4b83-9919-34cb10d5ed3f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.693828 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.707420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.712409 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.712688 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.722073 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9725a66-09e4-4b83-9919-34cb10d5ed3f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.723742 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49tfp\" (UniqueName: \"kubernetes.io/projected/b9725a66-09e4-4b83-9919-34cb10d5ed3f-kube-api-access-49tfp\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.757231 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eb70-account-create-update-tmkjg"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.770925 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m5x9t"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.800908 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9725a66-09e4-4b83-9919-34cb10d5ed3f\") " 
pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.810428 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.845175 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2a5a-account-create-update-dxj42"] Dec 09 17:21:48 crc kubenswrapper[4853]: I1209 17:21:48.889251 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9831-account-create-update-t5gr7"] Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.067777 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.354041 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerStarted","Data":"5b8e08231f3f0ab6f557ae89fa7a7d9dd8935a95b907ba56b2a9d8829de6ebd9"} Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.356314 4853 generic.go:334] "Generic (PLEG): container finished" podID="24b35a27-0dc5-4c08-9505-f81db9987470" containerID="4f3b63efcf880b14320b4bc145b837f78a8af0e11267d0bc40347a20e9fd0111" exitCode=0 Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.356658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v4hn8" event={"ID":"24b35a27-0dc5-4c08-9505-f81db9987470","Type":"ContainerDied","Data":"4f3b63efcf880b14320b4bc145b837f78a8af0e11267d0bc40347a20e9fd0111"} Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.360501 4853 generic.go:334] "Generic (PLEG): container finished" podID="f263ce7c-227e-4025-af32-6bf4176920f7" containerID="9cb9af83424083d069cbafddd9de10a27c4f49216ac13bba8f68717ac98f1864" exitCode=0 Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.360561 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9g9jw" event={"ID":"f263ce7c-227e-4025-af32-6bf4176920f7","Type":"ContainerDied","Data":"9cb9af83424083d069cbafddd9de10a27c4f49216ac13bba8f68717ac98f1864"} Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.360612 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9g9jw" event={"ID":"f263ce7c-227e-4025-af32-6bf4176920f7","Type":"ContainerStarted","Data":"3b784cf5186b67a1308820de476a05e6f728101f5873b1fad4d50f1150f9e82f"} Dec 09 17:21:49 crc kubenswrapper[4853]: I1209 17:21:49.581562 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ebb774-866f-478e-8ef2-7cfbd141887b" path="/var/lib/kubelet/pods/26ebb774-866f-478e-8ef2-7cfbd141887b/volumes" Dec 09 17:21:51 crc kubenswrapper[4853]: I1209 17:21:51.010993 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 17:21:51 crc kubenswrapper[4853]: I1209 17:21:51.261470 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:51 crc kubenswrapper[4853]: I1209 17:21:51.262251 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-756c8b85d7-nmj2j" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.516343 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-84b8897886-8dc2w"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.518090 4853 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.521450 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.521877 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-tdgvr" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.522003 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.534042 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-84b8897886-8dc2w"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.630228 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.630366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbt8\" (UniqueName: \"kubernetes.io/projected/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-kube-api-access-xdbt8\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.630630 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data-custom\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.630927 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-combined-ca-bundle\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.643556 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5c59b45f9f-x6g2g"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.646338 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.648763 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.677634 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c59b45f9f-x6g2g"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data-custom\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data-custom\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735621 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-combined-ca-bundle\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735715 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735758 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735789 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-combined-ca-bundle\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735825 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbt8\" (UniqueName: \"kubernetes.io/projected/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-kube-api-access-xdbt8\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.735848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn9v9\" (UniqueName: \"kubernetes.io/projected/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-kube-api-access-hn9v9\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: 
\"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.749655 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-dhffl"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.754249 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.765162 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbt8\" (UniqueName: \"kubernetes.io/projected/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-kube-api-access-xdbt8\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.768532 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-combined-ca-bundle\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.773781 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-694c4fff66-rx8bf"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.775382 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.779009 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.779530 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data-custom\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.780780 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data\") pod \"heat-engine-84b8897886-8dc2w\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.821396 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-dhffl"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.833111 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-694c4fff66-rx8bf"] Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.844856 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-combined-ca-bundle\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " 
pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845206 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845284 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhcg\" (UniqueName: \"kubernetes.io/projected/f140945e-1f28-41d6-b3c5-f09100c204df-kube-api-access-lmhcg\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845336 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845427 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845467 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-combined-ca-bundle\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845618 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn9v9\" (UniqueName: \"kubernetes.io/projected/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-kube-api-access-hn9v9\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845673 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data-custom\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845733 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data-custom\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845846 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbk5p\" (UniqueName: 
\"kubernetes.io/projected/2931d256-61bc-4a2d-945e-95828ba68e62-kube-api-access-vbk5p\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.845900 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.846002 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.846055 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-config\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.856993 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data-custom\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.858233 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.861799 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.869096 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-combined-ca-bundle\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.881301 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn9v9\" (UniqueName: \"kubernetes.io/projected/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-kube-api-access-hn9v9\") pod \"heat-cfnapi-5c59b45f9f-x6g2g\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.948212 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.948261 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-config\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.948334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-combined-ca-bundle\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.948357 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.948390 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhcg\" (UniqueName: \"kubernetes.io/projected/f140945e-1f28-41d6-b3c5-f09100c204df-kube-api-access-lmhcg\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.948685 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc 
kubenswrapper[4853]: I1209 17:21:53.948764 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.948873 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data-custom\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.951222 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.952550 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbk5p\" (UniqueName: \"kubernetes.io/projected/2931d256-61bc-4a2d-945e-95828ba68e62-kube-api-access-vbk5p\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.952569 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-combined-ca-bundle\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.952660 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.960483 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.961131 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.961143 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-config\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.961207 4853 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.961513 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data-custom\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.965002 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.980348 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbk5p\" (UniqueName: \"kubernetes.io/projected/2931d256-61bc-4a2d-945e-95828ba68e62-kube-api-access-vbk5p\") pod \"heat-api-694c4fff66-rx8bf\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.985741 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhcg\" (UniqueName: \"kubernetes.io/projected/f140945e-1f28-41d6-b3c5-f09100c204df-kube-api-access-lmhcg\") pod \"dnsmasq-dns-7756b9d78c-dhffl\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:53 crc kubenswrapper[4853]: I1209 17:21:53.996224 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:21:54 crc kubenswrapper[4853]: I1209 17:21:54.014499 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:21:54 crc kubenswrapper[4853]: I1209 17:21:54.038282 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:21:56 crc kubenswrapper[4853]: W1209 17:21:56.348769 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc14b612_092a_43a4_affb_95ff52a3e82d.slice/crio-e09d308ce256e05bd1a43bea04654f44669e4af742de995d637beba8d47f6da8 WatchSource:0}: Error finding container e09d308ce256e05bd1a43bea04654f44669e4af742de995d637beba8d47f6da8: Status 404 returned error can't find the container with id e09d308ce256e05bd1a43bea04654f44669e4af742de995d637beba8d47f6da8 Dec 09 17:21:56 crc kubenswrapper[4853]: W1209 17:21:56.367485 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02a2dc7d_7b15_48fb_96eb_f11ff9023399.slice/crio-c0534d17cbfdd6469ffb33efbc62bae4812732a931c57c94f12caf82292c015d WatchSource:0}: Error finding container c0534d17cbfdd6469ffb33efbc62bae4812732a931c57c94f12caf82292c015d: Status 404 returned error can't find the container with id c0534d17cbfdd6469ffb33efbc62bae4812732a931c57c94f12caf82292c015d Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.448581 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" event={"ID":"330df8bb-e48c-4b61-9cbd-69de9a1a1453","Type":"ContainerStarted","Data":"ef06a0ba2202e780b59e3e92470fac956ba3a5560c188eb6cb0b04d507ef2d0e"} Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.453494 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f508ab5e-133f-469f-9791-3444c10fc527","Type":"ContainerStarted","Data":"26974a0951bdd1485a294c63e72f74b5c63a5e56bd636bfe2905b6691a2cf9ab"} Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.455152 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5x9t" event={"ID":"02a2dc7d-7b15-48fb-96eb-f11ff9023399","Type":"ContainerStarted","Data":"c0534d17cbfdd6469ffb33efbc62bae4812732a931c57c94f12caf82292c015d"} Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.458736 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9831-account-create-update-t5gr7" event={"ID":"2a032d4f-5922-4085-8751-f413c1087e58","Type":"ContainerStarted","Data":"6b571d01b2b0d94d4fcca11c9fbe6e0582e9c043d7b80a1caee49cb801da5612"} Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.460133 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eb70-account-create-update-tmkjg" event={"ID":"dc14b612-092a-43a4-affb-95ff52a3e82d","Type":"ContainerStarted","Data":"e09d308ce256e05bd1a43bea04654f44669e4af742de995d637beba8d47f6da8"} Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.469001 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v4hn8" event={"ID":"24b35a27-0dc5-4c08-9505-f81db9987470","Type":"ContainerDied","Data":"c435090da6df95348a1c03e82a9b13dc5f6df183cb727b1666e46dbdad846a3d"} Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.469089 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c435090da6df95348a1c03e82a9b13dc5f6df183cb727b1666e46dbdad846a3d" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.479526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9g9jw" 
event={"ID":"f263ce7c-227e-4025-af32-6bf4176920f7","Type":"ContainerDied","Data":"3b784cf5186b67a1308820de476a05e6f728101f5873b1fad4d50f1150f9e82f"} Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.479577 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b784cf5186b67a1308820de476a05e6f728101f5873b1fad4d50f1150f9e82f" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.536624 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4hn8" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.634971 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b35a27-0dc5-4c08-9505-f81db9987470-operator-scripts\") pod \"24b35a27-0dc5-4c08-9505-f81db9987470\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.635650 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b35a27-0dc5-4c08-9505-f81db9987470-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24b35a27-0dc5-4c08-9505-f81db9987470" (UID: "24b35a27-0dc5-4c08-9505-f81db9987470"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.635678 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbg4z\" (UniqueName: \"kubernetes.io/projected/24b35a27-0dc5-4c08-9505-f81db9987470-kube-api-access-xbg4z\") pod \"24b35a27-0dc5-4c08-9505-f81db9987470\" (UID: \"24b35a27-0dc5-4c08-9505-f81db9987470\") " Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.638550 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b35a27-0dc5-4c08-9505-f81db9987470-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.647224 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b35a27-0dc5-4c08-9505-f81db9987470-kube-api-access-xbg4z" (OuterVolumeSpecName: "kube-api-access-xbg4z") pod "24b35a27-0dc5-4c08-9505-f81db9987470" (UID: "24b35a27-0dc5-4c08-9505-f81db9987470"). InnerVolumeSpecName "kube-api-access-xbg4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.662359 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9g9jw" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.743781 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f263ce7c-227e-4025-af32-6bf4176920f7-operator-scripts\") pod \"f263ce7c-227e-4025-af32-6bf4176920f7\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.743939 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzpvj\" (UniqueName: \"kubernetes.io/projected/f263ce7c-227e-4025-af32-6bf4176920f7-kube-api-access-zzpvj\") pod \"f263ce7c-227e-4025-af32-6bf4176920f7\" (UID: \"f263ce7c-227e-4025-af32-6bf4176920f7\") " Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.744618 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbg4z\" (UniqueName: \"kubernetes.io/projected/24b35a27-0dc5-4c08-9505-f81db9987470-kube-api-access-xbg4z\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.744971 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f263ce7c-227e-4025-af32-6bf4176920f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f263ce7c-227e-4025-af32-6bf4176920f7" (UID: "f263ce7c-227e-4025-af32-6bf4176920f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.749912 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f263ce7c-227e-4025-af32-6bf4176920f7-kube-api-access-zzpvj" (OuterVolumeSpecName: "kube-api-access-zzpvj") pod "f263ce7c-227e-4025-af32-6bf4176920f7" (UID: "f263ce7c-227e-4025-af32-6bf4176920f7"). InnerVolumeSpecName "kube-api-access-zzpvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.847165 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f263ce7c-227e-4025-af32-6bf4176920f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:56 crc kubenswrapper[4853]: I1209 17:21:56.847457 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzpvj\" (UniqueName: \"kubernetes.io/projected/f263ce7c-227e-4025-af32-6bf4176920f7-kube-api-access-zzpvj\") on node \"crc\" DevicePath \"\"" Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.054370 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-dhffl"] Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.428176 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c59b45f9f-x6g2g"] Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.540250 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" event={"ID":"f140945e-1f28-41d6-b3c5-f09100c204df","Type":"ContainerStarted","Data":"1192e6c93923c70b0c1557902831ec2268ce63b001c583d4f106280c4344791a"} Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.545812 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4hn8" Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.546072 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9g9jw" Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.561203 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-84b8897886-8dc2w"] Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.665394 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 17:21:57 crc kubenswrapper[4853]: I1209 17:21:57.675589 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-694c4fff66-rx8bf"] Dec 09 17:21:57 crc kubenswrapper[4853]: W1209 17:21:57.692858 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9725a66_09e4_4b83_9919_34cb10d5ed3f.slice/crio-4d69253899f622af101a44af66dd12c5fbe931db20aa944513c3413c9ff476fc WatchSource:0}: Error finding container 4d69253899f622af101a44af66dd12c5fbe931db20aa944513c3413c9ff476fc: Status 404 returned error can't find the container with id 4d69253899f622af101a44af66dd12c5fbe931db20aa944513c3413c9ff476fc Dec 09 17:21:57 crc kubenswrapper[4853]: W1209 17:21:57.834358 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2931d256_61bc_4a2d_945e_95828ba68e62.slice/crio-477b7779b3725306885cf89a1ad7dffefd82c1861fc3c431214343eb08e247d5 WatchSource:0}: Error finding container 477b7779b3725306885cf89a1ad7dffefd82c1861fc3c431214343eb08e247d5: Status 404 returned error can't find the container with id 477b7779b3725306885cf89a1ad7dffefd82c1861fc3c431214343eb08e247d5 Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.567852 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9725a66-09e4-4b83-9919-34cb10d5ed3f","Type":"ContainerStarted","Data":"4d69253899f622af101a44af66dd12c5fbe931db20aa944513c3413c9ff476fc"} Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.582700 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" event={"ID":"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f","Type":"ContainerStarted","Data":"4f32f54492df3d0df4bf1f79b530e75027a5f8fd33b1e82d263a29bf2fb56424"} Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.584092 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5x9t" event={"ID":"02a2dc7d-7b15-48fb-96eb-f11ff9023399","Type":"ContainerStarted","Data":"227b4b6a730d1a894ab140ef4bdfde2d72a2ff5877df8e505dec50e56ddec151"} Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.594491 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84b8897886-8dc2w" event={"ID":"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b","Type":"ContainerStarted","Data":"3be7e4855d79dfafec179b24f9ce42049ff897a4fa7c148e18761f428cfc4a38"} Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.595687 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.606527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-694c4fff66-rx8bf" event={"ID":"2931d256-61bc-4a2d-945e-95828ba68e62","Type":"ContainerStarted","Data":"477b7779b3725306885cf89a1ad7dffefd82c1861fc3c431214343eb08e247d5"} Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.617560 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-9831-account-create-update-t5gr7" event={"ID":"2a032d4f-5922-4085-8751-f413c1087e58","Type":"ContainerStarted","Data":"259f33b62214f99fae0ce702fb940677f82dc0b688ae70dddb7c1601a28b05e1"} Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.743011 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-m5x9t" podStartSLOduration=11.742983486 podStartE2EDuration="11.742983486s" podCreationTimestamp="2025-12-09 17:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:58.615269733 +0000 UTC m=+1545.550008915" watchObservedRunningTime="2025-12-09 17:21:58.742983486 +0000 UTC m=+1545.677722668" Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.743822 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-84b8897886-8dc2w" podStartSLOduration=5.74381336 podStartE2EDuration="5.74381336s" podCreationTimestamp="2025-12-09 17:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:58.636658711 +0000 UTC m=+1545.571397903" watchObservedRunningTime="2025-12-09 17:21:58.74381336 +0000 UTC m=+1545.678552542" Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.794681 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-eb70-account-create-update-tmkjg" podStartSLOduration=11.794658113 podStartE2EDuration="11.794658113s" podCreationTimestamp="2025-12-09 17:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:58.659687106 +0000 UTC m=+1545.594426278" watchObservedRunningTime="2025-12-09 17:21:58.794658113 +0000 UTC m=+1545.729397295" Dec 09 17:21:58 crc kubenswrapper[4853]: I1209 17:21:58.810978 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-9831-account-create-update-t5gr7" podStartSLOduration=11.810957679 podStartE2EDuration="11.810957679s" podCreationTimestamp="2025-12-09 17:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:58.6780619 +0000 UTC m=+1545.612801082" watchObservedRunningTime="2025-12-09 17:21:58.810957679 +0000 UTC m=+1545.745696861" Dec 09 17:21:59 crc kubenswrapper[4853]: E1209 17:21:59.423224 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf140945e_1f28_41d6_b3c5_f09100c204df.slice/crio-44796b9f1f7a163f31dff7c99dc7bd54dac12fc17e093b81f5bda1022df0aed0.scope\": RecentStats: unable to find data in memory cache]" Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.656658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f508ab5e-133f-469f-9791-3444c10fc527","Type":"ContainerStarted","Data":"1296150899df9488e3bec3059e1c26d6b153cb06f53fe1876156c7e4ef92df94"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.656707 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"f508ab5e-133f-469f-9791-3444c10fc527","Type":"ContainerStarted","Data":"c3dd14e0418cdad895a21f7e191d6834ad3f1aabbaf18d25cb521d843ff46545"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.660579 4853 generic.go:334] "Generic (PLEG): container finished" podID="02a2dc7d-7b15-48fb-96eb-f11ff9023399" containerID="227b4b6a730d1a894ab140ef4bdfde2d72a2ff5877df8e505dec50e56ddec151" exitCode=0 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.660762 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5x9t" event={"ID":"02a2dc7d-7b15-48fb-96eb-f11ff9023399","Type":"ContainerDied","Data":"227b4b6a730d1a894ab140ef4bdfde2d72a2ff5877df8e505dec50e56ddec151"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.665246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"24e96bca-760e-4742-823e-5cb3dc9d752e","Type":"ContainerStarted","Data":"b819fe62c9d687ed9ae04578074a915d9865fea54ae7e0197c8c6ff77af25562"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.683377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84b8897886-8dc2w" event={"ID":"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b","Type":"ContainerStarted","Data":"9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.689286 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.689268198 podStartE2EDuration="12.689268198s" podCreationTimestamp="2025-12-09 17:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:59.676836 +0000 UTC m=+1546.611575182" watchObservedRunningTime="2025-12-09 17:21:59.689268198 +0000 UTC m=+1546.624007390" Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.689590 4853 generic.go:334] "Generic (PLEG): container finished" podID="dc14b612-092a-43a4-affb-95ff52a3e82d" containerID="433a4b32e84f22a359dc5036ab9d4b35097553928d581fb72c2f102611449960" exitCode=0 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.689636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eb70-account-create-update-tmkjg" event={"ID":"dc14b612-092a-43a4-affb-95ff52a3e82d","Type":"ContainerDied","Data":"433a4b32e84f22a359dc5036ab9d4b35097553928d581fb72c2f102611449960"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.703180 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9725a66-09e4-4b83-9919-34cb10d5ed3f","Type":"ContainerStarted","Data":"18130ac0d0d8818be3bd60694e33d415f995b2cc182c8cdfe23fdac75ae4de28"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.710121 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerStarted","Data":"546ee5d9bfc67b94330ded5617368779666204285bce6f1600d2213861bbc012"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.710292 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="ceilometer-central-agent" containerID="cri-o://e1d2382f7519aac7bd68e69844f197665d2db80daa7ae9a14e6504b5c854a07d" gracePeriod=30 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.710983 4853 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="proxy-httpd" containerID="cri-o://546ee5d9bfc67b94330ded5617368779666204285bce6f1600d2213861bbc012" gracePeriod=30 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.711032 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="sg-core" containerID="cri-o://5b8e08231f3f0ab6f557ae89fa7a7d9dd8935a95b907ba56b2a9d8829de6ebd9" gracePeriod=30 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.711221 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="ceilometer-notification-agent" containerID="cri-o://6eb4f8c50a9d029699fac9d1cae6ed56efc8f368f306cc7a395badb3dc89e9ab" gracePeriod=30 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.711353 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.727900 4853 generic.go:334] "Generic (PLEG): container finished" podID="330df8bb-e48c-4b61-9cbd-69de9a1a1453" containerID="b93c58cded713c656901ad54e28c82d3aa5cc6eb90d4df345fbff3615ea5811b" exitCode=0 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.728039 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" event={"ID":"330df8bb-e48c-4b61-9cbd-69de9a1a1453","Type":"ContainerDied","Data":"b93c58cded713c656901ad54e28c82d3aa5cc6eb90d4df345fbff3615ea5811b"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.743673 4853 generic.go:334] "Generic (PLEG): container finished" podID="2a032d4f-5922-4085-8751-f413c1087e58" containerID="259f33b62214f99fae0ce702fb940677f82dc0b688ae70dddb7c1601a28b05e1" exitCode=0 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.743753 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9831-account-create-update-t5gr7" event={"ID":"2a032d4f-5922-4085-8751-f413c1087e58","Type":"ContainerDied","Data":"259f33b62214f99fae0ce702fb940677f82dc0b688ae70dddb7c1601a28b05e1"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.750457 4853 generic.go:334] "Generic (PLEG): container finished" podID="f140945e-1f28-41d6-b3c5-f09100c204df" containerID="44796b9f1f7a163f31dff7c99dc7bd54dac12fc17e093b81f5bda1022df0aed0" exitCode=0 Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.750500 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" event={"ID":"f140945e-1f28-41d6-b3c5-f09100c204df","Type":"ContainerDied","Data":"44796b9f1f7a163f31dff7c99dc7bd54dac12fc17e093b81f5bda1022df0aed0"} Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.757796 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.907988952 podStartE2EDuration="24.757773065s" podCreationTimestamp="2025-12-09 17:21:35 +0000 UTC" firstStartedPulling="2025-12-09 17:21:36.815350827 +0000 UTC m=+1523.750090009" lastFinishedPulling="2025-12-09 17:21:56.66513493 +0000 UTC m=+1543.599874122" observedRunningTime="2025-12-09 17:21:59.720324647 +0000 UTC m=+1546.655063829" watchObservedRunningTime="2025-12-09 17:21:59.757773065 +0000 UTC m=+1546.692512247" Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.795702 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.218843035 podStartE2EDuration="15.795672586s" podCreationTimestamp="2025-12-09 17:21:44 +0000 UTC" firstStartedPulling="2025-12-09 17:21:45.32671309 +0000 UTC m=+1532.261452272" lastFinishedPulling="2025-12-09 17:21:56.903542641 +0000 UTC m=+1543.838281823" observedRunningTime="2025-12-09 17:21:59.742832497 +0000 UTC m=+1546.677571679" watchObservedRunningTime="2025-12-09 17:21:59.795672586 +0000 UTC m=+1546.730411768" Dec 09 17:21:59 crc kubenswrapper[4853]: I1209 17:21:59.816850 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.816828728 podStartE2EDuration="11.816828728s" podCreationTimestamp="2025-12-09 17:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:21:59.777479747 +0000 UTC m=+1546.712218919" watchObservedRunningTime="2025-12-09 17:21:59.816828728 +0000 UTC m=+1546.751567920" Dec 09 17:22:00 crc kubenswrapper[4853]: I1209 17:22:00.764909 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9725a66-09e4-4b83-9919-34cb10d5ed3f","Type":"ContainerStarted","Data":"47bcd8769090cc4f8fc44a841f6e343c21fce6e07983b31415f97d74225b066c"} Dec 09 17:22:00 crc kubenswrapper[4853]: I1209 17:22:00.770561 4853 generic.go:334] "Generic (PLEG): container finished" podID="36a546d1-768b-4900-abe2-a2d9876b2557" containerID="546ee5d9bfc67b94330ded5617368779666204285bce6f1600d2213861bbc012" exitCode=0 Dec 09 17:22:00 crc kubenswrapper[4853]: I1209 17:22:00.770608 4853 generic.go:334] "Generic (PLEG): container finished" podID="36a546d1-768b-4900-abe2-a2d9876b2557" containerID="5b8e08231f3f0ab6f557ae89fa7a7d9dd8935a95b907ba56b2a9d8829de6ebd9" exitCode=2 Dec 09 17:22:00 crc kubenswrapper[4853]: I1209 17:22:00.770622 4853 generic.go:334] "Generic (PLEG): container finished" podID="36a546d1-768b-4900-abe2-a2d9876b2557" containerID="e1d2382f7519aac7bd68e69844f197665d2db80daa7ae9a14e6504b5c854a07d" exitCode=0 Dec 09 17:22:00 crc kubenswrapper[4853]: I1209 17:22:00.771254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerDied","Data":"546ee5d9bfc67b94330ded5617368779666204285bce6f1600d2213861bbc012"} Dec 09 17:22:00 crc kubenswrapper[4853]: I1209 17:22:00.771345 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerDied","Data":"5b8e08231f3f0ab6f557ae89fa7a7d9dd8935a95b907ba56b2a9d8829de6ebd9"} Dec 09 17:22:00 crc kubenswrapper[4853]: I1209 17:22:00.771360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerDied","Data":"e1d2382f7519aac7bd68e69844f197665d2db80daa7ae9a14e6504b5c854a07d"} Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.254378 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8648cdff54-k9lsq"] Dec 09 17:22:01 crc kubenswrapper[4853]: E1209 17:22:01.255035 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b35a27-0dc5-4c08-9505-f81db9987470" containerName="mariadb-database-create" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.255053 4853 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="24b35a27-0dc5-4c08-9505-f81db9987470" containerName="mariadb-database-create" Dec 09 17:22:01 crc kubenswrapper[4853]: E1209 17:22:01.255119 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f263ce7c-227e-4025-af32-6bf4176920f7" containerName="mariadb-database-create" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.255126 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f263ce7c-227e-4025-af32-6bf4176920f7" containerName="mariadb-database-create" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.255364 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f263ce7c-227e-4025-af32-6bf4176920f7" containerName="mariadb-database-create" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.255383 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b35a27-0dc5-4c08-9505-f81db9987470" containerName="mariadb-database-create" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.256365 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.275020 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-85746db47f-n5p9l"] Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.281495 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.307685 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-76d69ccdc-dctkd"] Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.309590 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372087 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5rv\" (UniqueName: \"kubernetes.io/projected/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-kube-api-access-mz5rv\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372403 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-combined-ca-bundle\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372432 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnq87\" (UniqueName: \"kubernetes.io/projected/2b00ae75-9222-4cf4-a896-27abacad2ae0-kube-api-access-wnq87\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372483 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data-custom\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372522 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372558 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfpf\" (UniqueName: \"kubernetes.io/projected/c583189f-34ec-4aea-a4a3-ce2600a3c07d-kube-api-access-spfpf\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372672 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-combined-ca-bundle\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372689 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-config-data-custom\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372716 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-combined-ca-bundle\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372777 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372811 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data-custom\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.372838 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-config-data\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.376956 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8648cdff54-k9lsq"] Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.411691 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85746db47f-n5p9l"] Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 
17:22:01.436396 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76d69ccdc-dctkd"] Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475002 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-combined-ca-bundle\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475042 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-config-data-custom\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475070 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-combined-ca-bundle\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475122 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475144 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data-custom\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475162 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-config-data\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5rv\" (UniqueName: \"kubernetes.io/projected/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-kube-api-access-mz5rv\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475212 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-combined-ca-bundle\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475231 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnq87\" (UniqueName: \"kubernetes.io/projected/2b00ae75-9222-4cf4-a896-27abacad2ae0-kube-api-access-wnq87\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: 
\"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data-custom\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.475329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spfpf\" (UniqueName: \"kubernetes.io/projected/c583189f-34ec-4aea-a4a3-ce2600a3c07d-kube-api-access-spfpf\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.486710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.499059 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-combined-ca-bundle\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.500058 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-config-data\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.501881 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-config-data-custom\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.523035 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data-custom\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.529560 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data-custom\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc 
kubenswrapper[4853]: I1209 17:22:01.531589 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b00ae75-9222-4cf4-a896-27abacad2ae0-combined-ca-bundle\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.531808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.533433 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnq87\" (UniqueName: \"kubernetes.io/projected/2b00ae75-9222-4cf4-a896-27abacad2ae0-kube-api-access-wnq87\") pod \"heat-engine-8648cdff54-k9lsq\" (UID: \"2b00ae75-9222-4cf4-a896-27abacad2ae0\") " pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.533511 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfpf\" (UniqueName: \"kubernetes.io/projected/c583189f-34ec-4aea-a4a3-ce2600a3c07d-kube-api-access-spfpf\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.534196 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5rv\" (UniqueName: \"kubernetes.io/projected/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-kube-api-access-mz5rv\") pod \"heat-api-85746db47f-n5p9l\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.535647 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-combined-ca-bundle\") pod \"heat-cfnapi-76d69ccdc-dctkd\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.605173 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.653292 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.657937 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.802864 4853 generic.go:334] "Generic (PLEG): container finished" podID="36a546d1-768b-4900-abe2-a2d9876b2557" containerID="6eb4f8c50a9d029699fac9d1cae6ed56efc8f368f306cc7a395badb3dc89e9ab" exitCode=0 Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.802934 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerDied","Data":"6eb4f8c50a9d029699fac9d1cae6ed56efc8f368f306cc7a395badb3dc89e9ab"} Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.806933 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" event={"ID":"330df8bb-e48c-4b61-9cbd-69de9a1a1453","Type":"ContainerDied","Data":"ef06a0ba2202e780b59e3e92470fac956ba3a5560c188eb6cb0b04d507ef2d0e"} Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.806972 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef06a0ba2202e780b59e3e92470fac956ba3a5560c188eb6cb0b04d507ef2d0e" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.809267 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5x9t" event={"ID":"02a2dc7d-7b15-48fb-96eb-f11ff9023399","Type":"ContainerDied","Data":"c0534d17cbfdd6469ffb33efbc62bae4812732a931c57c94f12caf82292c015d"} Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.809299 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0534d17cbfdd6469ffb33efbc62bae4812732a931c57c94f12caf82292c015d" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.812202 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eb70-account-create-update-tmkjg" event={"ID":"dc14b612-092a-43a4-affb-95ff52a3e82d","Type":"ContainerDied","Data":"e09d308ce256e05bd1a43bea04654f44669e4af742de995d637beba8d47f6da8"} Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.812272 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09d308ce256e05bd1a43bea04654f44669e4af742de995d637beba8d47f6da8" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.926214 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5x9t" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.984109 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eb70-account-create-update-tmkjg" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.984742 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:22:01 crc kubenswrapper[4853]: I1209 17:22:01.996643 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.006802 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a2dc7d-7b15-48fb-96eb-f11ff9023399-operator-scripts\") pod \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.007716 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf974\" (UniqueName: \"kubernetes.io/projected/02a2dc7d-7b15-48fb-96eb-f11ff9023399-kube-api-access-rf974\") pod \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\" (UID: \"02a2dc7d-7b15-48fb-96eb-f11ff9023399\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.013125 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a2dc7d-7b15-48fb-96eb-f11ff9023399-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02a2dc7d-7b15-48fb-96eb-f11ff9023399" (UID: "02a2dc7d-7b15-48fb-96eb-f11ff9023399"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.026718 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a2dc7d-7b15-48fb-96eb-f11ff9023399-kube-api-access-rf974" (OuterVolumeSpecName: "kube-api-access-rf974") pod "02a2dc7d-7b15-48fb-96eb-f11ff9023399" (UID: "02a2dc7d-7b15-48fb-96eb-f11ff9023399"). InnerVolumeSpecName "kube-api-access-rf974". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.114776 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc14b612-092a-43a4-affb-95ff52a3e82d-operator-scripts\") pod \"dc14b612-092a-43a4-affb-95ff52a3e82d\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.114858 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4hsm\" (UniqueName: \"kubernetes.io/projected/dc14b612-092a-43a4-affb-95ff52a3e82d-kube-api-access-k4hsm\") pod \"dc14b612-092a-43a4-affb-95ff52a3e82d\" (UID: \"dc14b612-092a-43a4-affb-95ff52a3e82d\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.114980 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a032d4f-5922-4085-8751-f413c1087e58-operator-scripts\") pod \"2a032d4f-5922-4085-8751-f413c1087e58\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.114999 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv7l2\" (UniqueName: \"kubernetes.io/projected/330df8bb-e48c-4b61-9cbd-69de9a1a1453-kube-api-access-xv7l2\") pod \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.115051 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330df8bb-e48c-4b61-9cbd-69de9a1a1453-operator-scripts\") pod \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\" (UID: \"330df8bb-e48c-4b61-9cbd-69de9a1a1453\") " Dec 09 17:22:02 crc 
kubenswrapper[4853]: I1209 17:22:02.115068 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkplr\" (UniqueName: \"kubernetes.io/projected/2a032d4f-5922-4085-8751-f413c1087e58-kube-api-access-dkplr\") pod \"2a032d4f-5922-4085-8751-f413c1087e58\" (UID: \"2a032d4f-5922-4085-8751-f413c1087e58\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.115960 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf974\" (UniqueName: \"kubernetes.io/projected/02a2dc7d-7b15-48fb-96eb-f11ff9023399-kube-api-access-rf974\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.115980 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02a2dc7d-7b15-48fb-96eb-f11ff9023399-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.116228 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc14b612-092a-43a4-affb-95ff52a3e82d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc14b612-092a-43a4-affb-95ff52a3e82d" (UID: "dc14b612-092a-43a4-affb-95ff52a3e82d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.116314 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a032d4f-5922-4085-8751-f413c1087e58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a032d4f-5922-4085-8751-f413c1087e58" (UID: "2a032d4f-5922-4085-8751-f413c1087e58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.116754 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/330df8bb-e48c-4b61-9cbd-69de9a1a1453-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "330df8bb-e48c-4b61-9cbd-69de9a1a1453" (UID: "330df8bb-e48c-4b61-9cbd-69de9a1a1453"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.129302 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330df8bb-e48c-4b61-9cbd-69de9a1a1453-kube-api-access-xv7l2" (OuterVolumeSpecName: "kube-api-access-xv7l2") pod "330df8bb-e48c-4b61-9cbd-69de9a1a1453" (UID: "330df8bb-e48c-4b61-9cbd-69de9a1a1453"). InnerVolumeSpecName "kube-api-access-xv7l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.144363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc14b612-092a-43a4-affb-95ff52a3e82d-kube-api-access-k4hsm" (OuterVolumeSpecName: "kube-api-access-k4hsm") pod "dc14b612-092a-43a4-affb-95ff52a3e82d" (UID: "dc14b612-092a-43a4-affb-95ff52a3e82d"). InnerVolumeSpecName "kube-api-access-k4hsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.144495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a032d4f-5922-4085-8751-f413c1087e58-kube-api-access-dkplr" (OuterVolumeSpecName: "kube-api-access-dkplr") pod "2a032d4f-5922-4085-8751-f413c1087e58" (UID: "2a032d4f-5922-4085-8751-f413c1087e58"). 
InnerVolumeSpecName "kube-api-access-dkplr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.218477 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330df8bb-e48c-4b61-9cbd-69de9a1a1453-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.218949 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkplr\" (UniqueName: \"kubernetes.io/projected/2a032d4f-5922-4085-8751-f413c1087e58-kube-api-access-dkplr\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.218966 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc14b612-092a-43a4-affb-95ff52a3e82d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.218985 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4hsm\" (UniqueName: \"kubernetes.io/projected/dc14b612-092a-43a4-affb-95ff52a3e82d-kube-api-access-k4hsm\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.218997 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a032d4f-5922-4085-8751-f413c1087e58-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.219010 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv7l2\" (UniqueName: \"kubernetes.io/projected/330df8bb-e48c-4b61-9cbd-69de9a1a1453-kube-api-access-xv7l2\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.249166 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.321012 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-run-httpd\") pod \"36a546d1-768b-4900-abe2-a2d9876b2557\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.321099 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-log-httpd\") pod \"36a546d1-768b-4900-abe2-a2d9876b2557\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.321141 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-combined-ca-bundle\") pod \"36a546d1-768b-4900-abe2-a2d9876b2557\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.321248 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-config-data\") pod \"36a546d1-768b-4900-abe2-a2d9876b2557\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.321344 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-scripts\") pod \"36a546d1-768b-4900-abe2-a2d9876b2557\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.321440 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-sg-core-conf-yaml\") pod \"36a546d1-768b-4900-abe2-a2d9876b2557\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.321465 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8nks\" (UniqueName: \"kubernetes.io/projected/36a546d1-768b-4900-abe2-a2d9876b2557-kube-api-access-v8nks\") pod \"36a546d1-768b-4900-abe2-a2d9876b2557\" (UID: \"36a546d1-768b-4900-abe2-a2d9876b2557\") " Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.323049 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "36a546d1-768b-4900-abe2-a2d9876b2557" (UID: "36a546d1-768b-4900-abe2-a2d9876b2557"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.323579 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "36a546d1-768b-4900-abe2-a2d9876b2557" (UID: "36a546d1-768b-4900-abe2-a2d9876b2557"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.329828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a546d1-768b-4900-abe2-a2d9876b2557-kube-api-access-v8nks" (OuterVolumeSpecName: "kube-api-access-v8nks") pod "36a546d1-768b-4900-abe2-a2d9876b2557" (UID: "36a546d1-768b-4900-abe2-a2d9876b2557"). InnerVolumeSpecName "kube-api-access-v8nks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.330396 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-scripts" (OuterVolumeSpecName: "scripts") pod "36a546d1-768b-4900-abe2-a2d9876b2557" (UID: "36a546d1-768b-4900-abe2-a2d9876b2557"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.426777 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.426810 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36a546d1-768b-4900-abe2-a2d9876b2557-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.426819 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.426852 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8nks\" (UniqueName: \"kubernetes.io/projected/36a546d1-768b-4900-abe2-a2d9876b2557-kube-api-access-v8nks\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.429986 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "36a546d1-768b-4900-abe2-a2d9876b2557" (UID: "36a546d1-768b-4900-abe2-a2d9876b2557"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.530762 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.587748 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36a546d1-768b-4900-abe2-a2d9876b2557" (UID: "36a546d1-768b-4900-abe2-a2d9876b2557"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.595945 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76d69ccdc-dctkd"] Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.634474 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.638074 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-config-data" (OuterVolumeSpecName: "config-data") pod "36a546d1-768b-4900-abe2-a2d9876b2557" (UID: "36a546d1-768b-4900-abe2-a2d9876b2557"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.737101 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36a546d1-768b-4900-abe2-a2d9876b2557-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.853720 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36a546d1-768b-4900-abe2-a2d9876b2557","Type":"ContainerDied","Data":"c14433b20c46b2bedd92a9120481896a5e246482118c020b16d7a5f20505be88"} Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.853783 4853 scope.go:117] "RemoveContainer" containerID="546ee5d9bfc67b94330ded5617368779666204285bce6f1600d2213861bbc012" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.853968 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.881272 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" event={"ID":"f140945e-1f28-41d6-b3c5-f09100c204df","Type":"ContainerStarted","Data":"9e830dd7cbcf906cc5234eb68a6225a40d14a5c80b3f6c6211654aaa99d69f15"} Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.881350 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.894707 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" event={"ID":"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f","Type":"ContainerStarted","Data":"112ba17d65cce5b1f8e8e55ad0bf9610cf345899cf3e9f10ceaf5698c86a2f65"} Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.894792 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.907361 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9831-account-create-update-t5gr7" event={"ID":"2a032d4f-5922-4085-8751-f413c1087e58","Type":"ContainerDied","Data":"6b571d01b2b0d94d4fcca11c9fbe6e0582e9c043d7b80a1caee49cb801da5612"} Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.907487 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b571d01b2b0d94d4fcca11c9fbe6e0582e9c043d7b80a1caee49cb801da5612" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.907563 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9831-account-create-update-t5gr7" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.931702 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85746db47f-n5p9l"] Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.957448 4853 scope.go:117] "RemoveContainer" containerID="5b8e08231f3f0ab6f557ae89fa7a7d9dd8935a95b907ba56b2a9d8829de6ebd9" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.997792 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-694c4fff66-rx8bf" event={"ID":"2931d256-61bc-4a2d-945e-95828ba68e62","Type":"ContainerStarted","Data":"e63625777ba013b969f53ed93d2854dcdaa1be857780e9a06154c646cc4cc96a"} Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.998203 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:22:02 crc kubenswrapper[4853]: I1209 17:22:02.998227 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.003155 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2a5a-account-create-update-dxj42" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.003279 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" event={"ID":"c583189f-34ec-4aea-a4a3-ce2600a3c07d","Type":"ContainerStarted","Data":"5e698c9b38f9e49f0781307e193a2e865bdfe70dd7f2ca5b1d00a254f5a0cb3d"} Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.003340 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5x9t" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.003427 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-eb70-account-create-update-tmkjg" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.017894 4853 scope.go:117] "RemoveContainer" containerID="6eb4f8c50a9d029699fac9d1cae6ed56efc8f368f306cc7a395badb3dc89e9ab" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.077805 4853 scope.go:117] "RemoveContainer" containerID="e1d2382f7519aac7bd68e69844f197665d2db80daa7ae9a14e6504b5c854a07d" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.098406 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8648cdff54-k9lsq"] Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.119683 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.141256 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.141854 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a2dc7d-7b15-48fb-96eb-f11ff9023399" containerName="mariadb-database-create" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.141873 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a2dc7d-7b15-48fb-96eb-f11ff9023399" containerName="mariadb-database-create" Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.141891 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="sg-core" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.141898 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="sg-core" Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.141910 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc14b612-092a-43a4-affb-95ff52a3e82d" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.141918 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc14b612-092a-43a4-affb-95ff52a3e82d" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.141932 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330df8bb-e48c-4b61-9cbd-69de9a1a1453" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.141940 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="330df8bb-e48c-4b61-9cbd-69de9a1a1453" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.141977 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="proxy-httpd" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.141983 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="proxy-httpd" Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.141998 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a032d4f-5922-4085-8751-f413c1087e58" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142004 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a032d4f-5922-4085-8751-f413c1087e58" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.142017 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" 
containerName="ceilometer-central-agent" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142025 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="ceilometer-central-agent" Dec 09 17:22:03 crc kubenswrapper[4853]: E1209 17:22:03.142040 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="ceilometer-notification-agent" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142047 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="ceilometer-notification-agent" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142283 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="proxy-httpd" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142306 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="sg-core" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142314 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="330df8bb-e48c-4b61-9cbd-69de9a1a1453" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142325 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="ceilometer-notification-agent" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142338 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a2dc7d-7b15-48fb-96eb-f11ff9023399" containerName="mariadb-database-create" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142350 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc14b612-092a-43a4-affb-95ff52a3e82d" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142362 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a032d4f-5922-4085-8751-f413c1087e58" containerName="mariadb-account-create-update" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.142371 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" containerName="ceilometer-central-agent" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.145089 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.148070 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.149201 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.157870 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" podStartSLOduration=10.157853064 podStartE2EDuration="10.157853064s" podCreationTimestamp="2025-12-09 17:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:02.998522715 +0000 UTC m=+1549.933261917" watchObservedRunningTime="2025-12-09 17:22:03.157853064 +0000 UTC m=+1550.092592246" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.186086 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.191942 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-694c4fff66-rx8bf" podStartSLOduration=6.075148591 podStartE2EDuration="10.191923987s" podCreationTimestamp="2025-12-09 17:21:53 +0000 UTC" firstStartedPulling="2025-12-09 17:21:57.851831158 +0000 UTC m=+1544.786570340" lastFinishedPulling="2025-12-09 17:22:01.968606554 +0000 UTC m=+1548.903345736" observedRunningTime="2025-12-09 17:22:03.032152226 +0000 UTC m=+1549.966891408" watchObservedRunningTime="2025-12-09 17:22:03.191923987 +0000 UTC m=+1550.126663169" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.208429 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnhr\" (UniqueName: \"kubernetes.io/projected/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-kube-api-access-6nnhr\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.208473 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.208502 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-scripts\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.208575 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-log-httpd\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.208616 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.208680 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-run-httpd\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.208706 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-config-data\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.224161 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" podStartSLOduration=5.917887261 podStartE2EDuration="10.224143919s" podCreationTimestamp="2025-12-09 17:21:53 +0000 UTC" firstStartedPulling="2025-12-09 17:21:57.537753969 +0000 UTC m=+1544.472493151" lastFinishedPulling="2025-12-09 17:22:01.844010627 +0000 UTC m=+1548.778749809" observedRunningTime="2025-12-09 17:22:03.091403515 +0000 UTC m=+1550.026142697" watchObservedRunningTime="2025-12-09 17:22:03.224143919 +0000 UTC m=+1550.158883101" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.315510 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-config-data\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.319515 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnhr\" (UniqueName: \"kubernetes.io/projected/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-kube-api-access-6nnhr\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.319584 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.319636 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-scripts\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.319863 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-log-httpd\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.319923 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " 
pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.320119 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-run-httpd\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.320879 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-run-httpd\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.325797 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-log-httpd\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.348533 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-scripts\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.348891 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.351865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-config-data\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.352789 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.363449 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnhr\" (UniqueName: \"kubernetes.io/projected/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-kube-api-access-6nnhr\") pod \"ceilometer-0\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.474951 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.605071 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a546d1-768b-4900-abe2-a2d9876b2557" path="/var/lib/kubelet/pods/36a546d1-768b-4900-abe2-a2d9876b2557/volumes" Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.963769 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c59b45f9f-x6g2g"] Dec 09 17:22:03 crc kubenswrapper[4853]: I1209 17:22:03.988957 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-694c4fff66-rx8bf"] Dec 09 17:22:04 crc kubenswrapper[4853]: W1209 17:22:04.048911 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf3ebbb_a614_4447_a448_f6dbf2ab5173.slice/crio-cc0fc8d85a000510a51b9164662c3dbb8004a11749610e5a53211928f532f3e6 WatchSource:0}: Error finding container cc0fc8d85a000510a51b9164662c3dbb8004a11749610e5a53211928f532f3e6: Status 404 returned error can't find the container with id cc0fc8d85a000510a51b9164662c3dbb8004a11749610e5a53211928f532f3e6 Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.053970 4853 generic.go:334] "Generic (PLEG): container finished" podID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerID="220307eb82c44fe2d9476b6c4637b9a3608437d993a41dde74699cc13f4c5c8e" exitCode=1 Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.054062 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85746db47f-n5p9l" event={"ID":"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67","Type":"ContainerDied","Data":"220307eb82c44fe2d9476b6c4637b9a3608437d993a41dde74699cc13f4c5c8e"} Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.054086 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85746db47f-n5p9l" event={"ID":"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67","Type":"ContainerStarted","Data":"39ddd99e789dab3ba2af760b84e53ede6905d7cf5040dd06eff3521cdae76d69"} Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.054843 4853 scope.go:117] "RemoveContainer" containerID="220307eb82c44fe2d9476b6c4637b9a3608437d993a41dde74699cc13f4c5c8e" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.057195 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.094177 4853 generic.go:334] "Generic (PLEG): container finished" podID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerID="fdf74e86593e46b9e2aa7e1dbef75c28f6cb2b1815e738e7eebc4ee1c4165537" exitCode=1 Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.094255 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" event={"ID":"c583189f-34ec-4aea-a4a3-ce2600a3c07d","Type":"ContainerDied","Data":"fdf74e86593e46b9e2aa7e1dbef75c28f6cb2b1815e738e7eebc4ee1c4165537"} Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.094985 4853 scope.go:117] "RemoveContainer" containerID="fdf74e86593e46b9e2aa7e1dbef75c28f6cb2b1815e738e7eebc4ee1c4165537" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.097783 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6b4ff486d-bdppg"] Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.099953 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.102880 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.103175 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.115497 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8648cdff54-k9lsq" event={"ID":"2b00ae75-9222-4cf4-a896-27abacad2ae0","Type":"ContainerStarted","Data":"ebfed883e9ed3d9e3ed579d9554c5a95f4ed70ec5ede4a186aeee777f2481a03"} Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.115536 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8648cdff54-k9lsq" event={"ID":"2b00ae75-9222-4cf4-a896-27abacad2ae0","Type":"ContainerStarted","Data":"b499dda442333bec6fba92ba2d90fdfc59b82e3947cf8783c6c78d1756e91a86"} Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.115549 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.119976 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-fb7c95bc-v962c"] Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.121613 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.131629 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.131787 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.139977 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fb7c95bc-v962c"] Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.148522 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrsp\" (UniqueName: \"kubernetes.io/projected/33db0244-869a-4927-87c3-092a2aae9d4a-kube-api-access-srrsp\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.148764 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-config-data\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.148848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-config-data-custom\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.148900 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-combined-ca-bundle\") 
pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.148942 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-internal-tls-certs\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.149120 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-config-data\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.149265 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-internal-tls-certs\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.149346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-public-tls-certs\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.149410 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-public-tls-certs\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.149481 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-config-data-custom\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.149517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkv48\" (UniqueName: \"kubernetes.io/projected/40b99c71-7761-4134-b7f4-021564f209f5-kube-api-access-rkv48\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.149643 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-combined-ca-bundle\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.165263 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b4ff486d-bdppg"] Dec 09 17:22:04 crc 
kubenswrapper[4853]: I1209 17:22:04.209863 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8648cdff54-k9lsq" podStartSLOduration=3.209844804 podStartE2EDuration="3.209844804s" podCreationTimestamp="2025-12-09 17:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:04.165222465 +0000 UTC m=+1551.099961637" watchObservedRunningTime="2025-12-09 17:22:04.209844804 +0000 UTC m=+1551.144583986" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.250561 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-config-data-custom\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.250620 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-combined-ca-bundle\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.250637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-internal-tls-certs\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.250674 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-config-data\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251497 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-internal-tls-certs\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251542 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-public-tls-certs\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251585 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-public-tls-certs\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251629 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-config-data-custom\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: 
\"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkv48\" (UniqueName: \"kubernetes.io/projected/40b99c71-7761-4134-b7f4-021564f209f5-kube-api-access-rkv48\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-combined-ca-bundle\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251713 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrsp\" (UniqueName: \"kubernetes.io/projected/33db0244-869a-4927-87c3-092a2aae9d4a-kube-api-access-srrsp\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.251771 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-config-data\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.269242 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-config-data\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.299304 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-internal-tls-certs\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.299823 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-combined-ca-bundle\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.301096 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-public-tls-certs\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.301585 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-combined-ca-bundle\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc 
kubenswrapper[4853]: I1209 17:22:04.313270 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-public-tls-certs\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.313381 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-internal-tls-certs\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.314465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-config-data\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.315433 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40b99c71-7761-4134-b7f4-021564f209f5-config-data-custom\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.322207 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33db0244-869a-4927-87c3-092a2aae9d4a-config-data-custom\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.323649 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkv48\" (UniqueName: \"kubernetes.io/projected/40b99c71-7761-4134-b7f4-021564f209f5-kube-api-access-rkv48\") pod \"heat-cfnapi-fb7c95bc-v962c\" (UID: \"40b99c71-7761-4134-b7f4-021564f209f5\") " pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.326552 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrsp\" (UniqueName: \"kubernetes.io/projected/33db0244-869a-4927-87c3-092a2aae9d4a-kube-api-access-srrsp\") pod \"heat-api-6b4ff486d-bdppg\" (UID: \"33db0244-869a-4927-87c3-092a2aae9d4a\") " pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.471843 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:04 crc kubenswrapper[4853]: I1209 17:22:04.507880 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.107433 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b4ff486d-bdppg"] Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.137402 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerStarted","Data":"cc0fc8d85a000510a51b9164662c3dbb8004a11749610e5a53211928f532f3e6"} Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.143976 4853 generic.go:334] "Generic (PLEG): container finished" podID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerID="cb51dfd75abb5c08a50f14caf01296efa89724a81d991dbf134fb6a2819e0a93" exitCode=1 Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.144063 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85746db47f-n5p9l" event={"ID":"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67","Type":"ContainerDied","Data":"cb51dfd75abb5c08a50f14caf01296efa89724a81d991dbf134fb6a2819e0a93"} Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.144093 4853 scope.go:117] "RemoveContainer" containerID="220307eb82c44fe2d9476b6c4637b9a3608437d993a41dde74699cc13f4c5c8e" Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.145716 4853 scope.go:117] "RemoveContainer" containerID="cb51dfd75abb5c08a50f14caf01296efa89724a81d991dbf134fb6a2819e0a93" Dec 09 17:22:05 crc kubenswrapper[4853]: E1209 17:22:05.146168 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-85746db47f-n5p9l_openstack(bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67)\"" pod="openstack/heat-api-85746db47f-n5p9l" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.150884 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" event={"ID":"c583189f-34ec-4aea-a4a3-ce2600a3c07d","Type":"ContainerDied","Data":"a94e6308fded3515eea03b5ce99878e35de576ea5be910bc338157cc29cd553a"} Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.150896 4853 generic.go:334] "Generic (PLEG): container finished" podID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerID="a94e6308fded3515eea03b5ce99878e35de576ea5be910bc338157cc29cd553a" exitCode=1 Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.151557 4853 scope.go:117] "RemoveContainer" containerID="a94e6308fded3515eea03b5ce99878e35de576ea5be910bc338157cc29cd553a" Dec 09 17:22:05 crc kubenswrapper[4853]: E1209 17:22:05.151831 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-76d69ccdc-dctkd_openstack(c583189f-34ec-4aea-a4a3-ce2600a3c07d)\"" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.154058 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" podUID="8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" containerName="heat-cfnapi" containerID="cri-o://112ba17d65cce5b1f8e8e55ad0bf9610cf345899cf3e9f10ceaf5698c86a2f65" gracePeriod=60 Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.154942 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b4ff486d-bdppg" 
event={"ID":"33db0244-869a-4927-87c3-092a2aae9d4a","Type":"ContainerStarted","Data":"a51a4e0bd70904a7e145fed53e4a56c650beea47dc10e91fca4af5e3e34d4fba"} Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.156286 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-694c4fff66-rx8bf" podUID="2931d256-61bc-4a2d-945e-95828ba68e62" containerName="heat-api" containerID="cri-o://e63625777ba013b969f53ed93d2854dcdaa1be857780e9a06154c646cc4cc96a" gracePeriod=60 Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.261770 4853 scope.go:117] "RemoveContainer" containerID="fdf74e86593e46b9e2aa7e1dbef75c28f6cb2b1815e738e7eebc4ee1c4165537" Dec 09 17:22:05 crc kubenswrapper[4853]: I1209 17:22:05.279395 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-fb7c95bc-v962c"] Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.183615 4853 scope.go:117] "RemoveContainer" containerID="cb51dfd75abb5c08a50f14caf01296efa89724a81d991dbf134fb6a2819e0a93" Dec 09 17:22:06 crc kubenswrapper[4853]: E1209 17:22:06.184576 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-85746db47f-n5p9l_openstack(bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67)\"" pod="openstack/heat-api-85746db47f-n5p9l" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.192938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fb7c95bc-v962c" event={"ID":"40b99c71-7761-4134-b7f4-021564f209f5","Type":"ContainerStarted","Data":"7f9c9fb8bd5a4a30a2c0d0e1d00d2f7d2cc4b871abc27ba78ccf8135fac73964"} Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.193000 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-fb7c95bc-v962c" event={"ID":"40b99c71-7761-4134-b7f4-021564f209f5","Type":"ContainerStarted","Data":"b755019081b4dcdfee9681011970fecc37abb43de5bcedf9a6df7d4dc0f922a8"} Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.193109 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.197815 4853 generic.go:334] "Generic (PLEG): container finished" podID="2931d256-61bc-4a2d-945e-95828ba68e62" containerID="e63625777ba013b969f53ed93d2854dcdaa1be857780e9a06154c646cc4cc96a" exitCode=0 Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.197887 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-694c4fff66-rx8bf" event={"ID":"2931d256-61bc-4a2d-945e-95828ba68e62","Type":"ContainerDied","Data":"e63625777ba013b969f53ed93d2854dcdaa1be857780e9a06154c646cc4cc96a"} Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.197916 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-694c4fff66-rx8bf" event={"ID":"2931d256-61bc-4a2d-945e-95828ba68e62","Type":"ContainerDied","Data":"477b7779b3725306885cf89a1ad7dffefd82c1861fc3c431214343eb08e247d5"} Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.197929 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477b7779b3725306885cf89a1ad7dffefd82c1861fc3c431214343eb08e247d5" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.200274 4853 scope.go:117] "RemoveContainer" containerID="a94e6308fded3515eea03b5ce99878e35de576ea5be910bc338157cc29cd553a" Dec 09 17:22:06 crc kubenswrapper[4853]: 
E1209 17:22:06.200552 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-76d69ccdc-dctkd_openstack(c583189f-34ec-4aea-a4a3-ce2600a3c07d)\"" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.220673 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b4ff486d-bdppg" event={"ID":"33db0244-869a-4927-87c3-092a2aae9d4a","Type":"ContainerStarted","Data":"b0f2746d2846c5564f20b7d156477c539be28972b7f379e856c05fdb5b497dce"} Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.221823 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.229133 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.239908 4853 generic.go:334] "Generic (PLEG): container finished" podID="8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" containerID="112ba17d65cce5b1f8e8e55ad0bf9610cf345899cf3e9f10ceaf5698c86a2f65" exitCode=0 Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.240019 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" event={"ID":"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f","Type":"ContainerDied","Data":"112ba17d65cce5b1f8e8e55ad0bf9610cf345899cf3e9f10ceaf5698c86a2f65"} Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.250748 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerStarted","Data":"2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6"} Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.330498 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-fb7c95bc-v962c" podStartSLOduration=3.330476828 podStartE2EDuration="3.330476828s" podCreationTimestamp="2025-12-09 17:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:06.281837047 +0000 UTC m=+1553.216576229" watchObservedRunningTime="2025-12-09 17:22:06.330476828 +0000 UTC m=+1553.265216010" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.353494 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6b4ff486d-bdppg" podStartSLOduration=2.353473482 podStartE2EDuration="2.353473482s" podCreationTimestamp="2025-12-09 17:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:06.327359511 +0000 UTC m=+1553.262098703" watchObservedRunningTime="2025-12-09 17:22:06.353473482 +0000 UTC m=+1553.288212664" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.393438 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.420355 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data-custom\") pod \"2931d256-61bc-4a2d-945e-95828ba68e62\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.420687 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-combined-ca-bundle\") pod \"2931d256-61bc-4a2d-945e-95828ba68e62\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.420718 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data\") pod \"2931d256-61bc-4a2d-945e-95828ba68e62\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.420748 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbk5p\" (UniqueName: \"kubernetes.io/projected/2931d256-61bc-4a2d-945e-95828ba68e62-kube-api-access-vbk5p\") pod \"2931d256-61bc-4a2d-945e-95828ba68e62\" (UID: \"2931d256-61bc-4a2d-945e-95828ba68e62\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.428888 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2931d256-61bc-4a2d-945e-95828ba68e62-kube-api-access-vbk5p" (OuterVolumeSpecName: "kube-api-access-vbk5p") pod "2931d256-61bc-4a2d-945e-95828ba68e62" (UID: "2931d256-61bc-4a2d-945e-95828ba68e62"). InnerVolumeSpecName "kube-api-access-vbk5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.435683 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2931d256-61bc-4a2d-945e-95828ba68e62" (UID: "2931d256-61bc-4a2d-945e-95828ba68e62"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.470925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2931d256-61bc-4a2d-945e-95828ba68e62" (UID: "2931d256-61bc-4a2d-945e-95828ba68e62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.510720 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data" (OuterVolumeSpecName: "config-data") pod "2931d256-61bc-4a2d-945e-95828ba68e62" (UID: "2931d256-61bc-4a2d-945e-95828ba68e62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525057 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-combined-ca-bundle\") pod \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525096 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data\") pod \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525152 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn9v9\" (UniqueName: \"kubernetes.io/projected/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-kube-api-access-hn9v9\") pod \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525311 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data-custom\") pod \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\" (UID: \"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f\") " Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525957 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525974 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525983 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbk5p\" (UniqueName: \"kubernetes.io/projected/2931d256-61bc-4a2d-945e-95828ba68e62-kube-api-access-vbk5p\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.525993 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2931d256-61bc-4a2d-945e-95828ba68e62-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.529299 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-kube-api-access-hn9v9" (OuterVolumeSpecName: "kube-api-access-hn9v9") pod "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" (UID: "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f"). InnerVolumeSpecName "kube-api-access-hn9v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.533866 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" (UID: "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.567797 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" (UID: "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.603096 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data" (OuterVolumeSpecName: "config-data") pod "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" (UID: "8ae5cf0c-3ea0-4347-a9ab-d95e4556157f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.628756 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.628806 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.628822 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.628836 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn9v9\" (UniqueName: \"kubernetes.io/projected/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f-kube-api-access-hn9v9\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.645992 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.654700 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.654772 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.659662 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:06 crc kubenswrapper[4853]: I1209 17:22:06.659758 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.264237 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.264257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c59b45f9f-x6g2g" event={"ID":"8ae5cf0c-3ea0-4347-a9ab-d95e4556157f","Type":"ContainerDied","Data":"4f32f54492df3d0df4bf1f79b530e75027a5f8fd33b1e82d263a29bf2fb56424"} Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.264677 4853 scope.go:117] "RemoveContainer" containerID="112ba17d65cce5b1f8e8e55ad0bf9610cf345899cf3e9f10ceaf5698c86a2f65" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.267483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerStarted","Data":"e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96"} Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.267503 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-694c4fff66-rx8bf" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.268793 4853 scope.go:117] "RemoveContainer" containerID="a94e6308fded3515eea03b5ce99878e35de576ea5be910bc338157cc29cd553a" Dec 09 17:22:07 crc kubenswrapper[4853]: E1209 17:22:07.268996 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-76d69ccdc-dctkd_openstack(c583189f-34ec-4aea-a4a3-ce2600a3c07d)\"" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.269042 4853 scope.go:117] "RemoveContainer" containerID="cb51dfd75abb5c08a50f14caf01296efa89724a81d991dbf134fb6a2819e0a93" Dec 09 17:22:07 crc kubenswrapper[4853]: E1209 17:22:07.269347 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-85746db47f-n5p9l_openstack(bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67)\"" pod="openstack/heat-api-85746db47f-n5p9l" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.340725 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c59b45f9f-x6g2g"] Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.376359 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5c59b45f9f-x6g2g"] Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.402238 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-694c4fff66-rx8bf"] Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.419339 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-694c4fff66-rx8bf"] Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.583251 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2931d256-61bc-4a2d-945e-95828ba68e62" path="/var/lib/kubelet/pods/2931d256-61bc-4a2d-945e-95828ba68e62/volumes" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.584112 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" path="/var/lib/kubelet/pods/8ae5cf0c-3ea0-4347-a9ab-d95e4556157f/volumes" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.773793 4853 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-2mxrk"] Dec 09 17:22:07 crc kubenswrapper[4853]: E1209 17:22:07.774557 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" containerName="heat-cfnapi" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.774654 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" containerName="heat-cfnapi" Dec 09 17:22:07 crc kubenswrapper[4853]: E1209 17:22:07.774737 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2931d256-61bc-4a2d-945e-95828ba68e62" containerName="heat-api" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.774808 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2931d256-61bc-4a2d-945e-95828ba68e62" containerName="heat-api" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.775074 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2931d256-61bc-4a2d-945e-95828ba68e62" containerName="heat-api" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.775184 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae5cf0c-3ea0-4347-a9ab-d95e4556157f" containerName="heat-cfnapi" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.776033 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.779380 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lzxnd" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.779739 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.786959 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.805905 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mxrk"] Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.874086 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.874139 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.962951 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6trm\" (UniqueName: \"kubernetes.io/projected/fb169fd1-98da-414e-9487-67a58e01f0a6-kube-api-access-t6trm\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.963122 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-config-data\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.963146 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:07 crc kubenswrapper[4853]: I1209 17:22:07.963177 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-scripts\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.027233 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.033068 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.065765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-config-data\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.066145 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.067191 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-scripts\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.067633 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6trm\" (UniqueName: \"kubernetes.io/projected/fb169fd1-98da-414e-9487-67a58e01f0a6-kube-api-access-t6trm\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.074971 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.084592 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-scripts\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.084696 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-config-data\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.088419 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6trm\" (UniqueName: \"kubernetes.io/projected/fb169fd1-98da-414e-9487-67a58e01f0a6-kube-api-access-t6trm\") pod \"nova-cell0-conductor-db-sync-2mxrk\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.170458 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.284387 4853 scope.go:117] "RemoveContainer" containerID="cb51dfd75abb5c08a50f14caf01296efa89724a81d991dbf134fb6a2819e0a93" Dec 09 17:22:08 crc kubenswrapper[4853]: E1209 17:22:08.284691 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-85746db47f-n5p9l_openstack(bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67)\"" pod="openstack/heat-api-85746db47f-n5p9l" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.285299 4853 scope.go:117] "RemoveContainer" containerID="a94e6308fded3515eea03b5ce99878e35de576ea5be910bc338157cc29cd553a" Dec 09 17:22:08 crc kubenswrapper[4853]: E1209 17:22:08.285481 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-76d69ccdc-dctkd_openstack(c583189f-34ec-4aea-a4a3-ce2600a3c07d)\"" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.293189 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.293260 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.810810 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.810856 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.877381 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:08 crc kubenswrapper[4853]: I1209 17:22:08.902528 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.016725 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.100118 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hh78g"] Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.100793 4853 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" podUID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerName="dnsmasq-dns" containerID="cri-o://afb53e07ac2f4a761b6337eafcc668a4df5dfbebfc5914b44f23a4f9c2400359" gracePeriod=10 Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.335032 4853 generic.go:334] "Generic (PLEG): container finished" podID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerID="afb53e07ac2f4a761b6337eafcc668a4df5dfbebfc5914b44f23a4f9c2400359" exitCode=0 Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.335121 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" event={"ID":"72ebed7f-14e0-4b36-bf40-f66e71e044b6","Type":"ContainerDied","Data":"afb53e07ac2f4a761b6337eafcc668a4df5dfbebfc5914b44f23a4f9c2400359"} Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.339018 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerStarted","Data":"3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155"} Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.339854 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.339895 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:09 crc kubenswrapper[4853]: I1209 17:22:09.462614 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mxrk"] Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.195322 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.349891 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-sb\") pod \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.349960 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-swift-storage-0\") pod \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.350013 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-config\") pod \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.350070 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k6lh\" (UniqueName: \"kubernetes.io/projected/72ebed7f-14e0-4b36-bf40-f66e71e044b6-kube-api-access-4k6lh\") pod \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.350144 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-nb\") pod 
\"72ebed7f-14e0-4b36-bf40-f66e71e044b6\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.350231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-svc\") pod \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\" (UID: \"72ebed7f-14e0-4b36-bf40-f66e71e044b6\") " Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.359191 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" event={"ID":"fb169fd1-98da-414e-9487-67a58e01f0a6","Type":"ContainerStarted","Data":"4aa362c485ecc073ec18c470f9ae557e2067be1c68aa3b2b339227c38d28f727"} Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.361493 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ebed7f-14e0-4b36-bf40-f66e71e044b6-kube-api-access-4k6lh" (OuterVolumeSpecName: "kube-api-access-4k6lh") pod "72ebed7f-14e0-4b36-bf40-f66e71e044b6" (UID: "72ebed7f-14e0-4b36-bf40-f66e71e044b6"). InnerVolumeSpecName "kube-api-access-4k6lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.367986 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.368906 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hh78g" event={"ID":"72ebed7f-14e0-4b36-bf40-f66e71e044b6","Type":"ContainerDied","Data":"14fe3409df95b5604d8e320c61a8d378acb48b9e8626a7d6b268e2b77df76c6b"} Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.368954 4853 scope.go:117] "RemoveContainer" containerID="afb53e07ac2f4a761b6337eafcc668a4df5dfbebfc5914b44f23a4f9c2400359" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.369085 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.369098 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.457173 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k6lh\" (UniqueName: \"kubernetes.io/projected/72ebed7f-14e0-4b36-bf40-f66e71e044b6-kube-api-access-4k6lh\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.502245 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72ebed7f-14e0-4b36-bf40-f66e71e044b6" (UID: "72ebed7f-14e0-4b36-bf40-f66e71e044b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.518384 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-config" (OuterVolumeSpecName: "config") pod "72ebed7f-14e0-4b36-bf40-f66e71e044b6" (UID: "72ebed7f-14e0-4b36-bf40-f66e71e044b6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.531384 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "72ebed7f-14e0-4b36-bf40-f66e71e044b6" (UID: "72ebed7f-14e0-4b36-bf40-f66e71e044b6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.532210 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "72ebed7f-14e0-4b36-bf40-f66e71e044b6" (UID: "72ebed7f-14e0-4b36-bf40-f66e71e044b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.581845 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "72ebed7f-14e0-4b36-bf40-f66e71e044b6" (UID: "72ebed7f-14e0-4b36-bf40-f66e71e044b6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.587503 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.587539 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.587550 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.587558 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.587569 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72ebed7f-14e0-4b36-bf40-f66e71e044b6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.679766 4853 scope.go:117] "RemoveContainer" containerID="da924114d938d77800a9374479a024bc65bd3441e827ed8337b3aa639aa07430" Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.746679 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hh78g"] Dec 09 17:22:10 crc kubenswrapper[4853]: I1209 17:22:10.758451 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hh78g"] Dec 09 17:22:11 crc kubenswrapper[4853]: I1209 17:22:11.387158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerStarted","Data":"e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef"} Dec 09 17:22:11 crc 
kubenswrapper[4853]: I1209 17:22:11.387636 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-central-agent" containerID="cri-o://2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6" gracePeriod=30 Dec 09 17:22:11 crc kubenswrapper[4853]: I1209 17:22:11.387925 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:22:11 crc kubenswrapper[4853]: I1209 17:22:11.388360 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="proxy-httpd" containerID="cri-o://e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef" gracePeriod=30 Dec 09 17:22:11 crc kubenswrapper[4853]: I1209 17:22:11.388413 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="sg-core" containerID="cri-o://3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155" gracePeriod=30 Dec 09 17:22:11 crc kubenswrapper[4853]: I1209 17:22:11.388473 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-notification-agent" containerID="cri-o://e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96" gracePeriod=30 Dec 09 17:22:11 crc kubenswrapper[4853]: I1209 17:22:11.584130 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" path="/var/lib/kubelet/pods/72ebed7f-14e0-4b36-bf40-f66e71e044b6/volumes" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.334712 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.334860 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.350977 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.351174 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.354103 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.365876 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.632029232 podStartE2EDuration="10.365856805s" podCreationTimestamp="2025-12-09 17:22:02 +0000 UTC" firstStartedPulling="2025-12-09 17:22:04.081405839 +0000 UTC m=+1551.016145021" lastFinishedPulling="2025-12-09 17:22:10.815233412 +0000 UTC m=+1557.749972594" observedRunningTime="2025-12-09 17:22:11.472411952 +0000 UTC m=+1558.407151144" watchObservedRunningTime="2025-12-09 17:22:12.365856805 +0000 UTC m=+1559.300595987" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.470505 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.533424 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerID="e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef" exitCode=0 Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.533696 4853 generic.go:334] "Generic (PLEG): container finished" podID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerID="3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155" exitCode=2 Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.533773 4853 generic.go:334] "Generic (PLEG): container finished" podID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerID="e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96" exitCode=0 Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.534883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerDied","Data":"e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef"} Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.534997 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerDied","Data":"3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155"} Dec 09 17:22:12 crc kubenswrapper[4853]: I1209 17:22:12.535066 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerDied","Data":"e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96"} Dec 09 17:22:14 crc kubenswrapper[4853]: I1209 17:22:14.118331 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:22:16 crc kubenswrapper[4853]: I1209 17:22:16.381015 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-fb7c95bc-v962c" Dec 09 17:22:16 crc kubenswrapper[4853]: I1209 17:22:16.417064 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6b4ff486d-bdppg" Dec 09 17:22:16 crc kubenswrapper[4853]: I1209 17:22:16.524251 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-76d69ccdc-dctkd"] Dec 09 17:22:16 crc kubenswrapper[4853]: I1209 17:22:16.555301 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-85746db47f-n5p9l"] Dec 09 17:22:21 crc kubenswrapper[4853]: I1209 17:22:21.647312 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8648cdff54-k9lsq" Dec 09 17:22:21 crc kubenswrapper[4853]: I1209 17:22:21.712218 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-84b8897886-8dc2w"] Dec 09 17:22:21 crc kubenswrapper[4853]: I1209 17:22:21.712433 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-84b8897886-8dc2w" podUID="7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" containerName="heat-engine" containerID="cri-o://9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd" gracePeriod=60 Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.212741 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.219651 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.319974 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spfpf\" (UniqueName: \"kubernetes.io/projected/c583189f-34ec-4aea-a4a3-ce2600a3c07d-kube-api-access-spfpf\") pod \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.320536 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-combined-ca-bundle\") pod \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.320706 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data-custom\") pod \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.320735 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data\") pod \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\" (UID: \"c583189f-34ec-4aea-a4a3-ce2600a3c07d\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.326676 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c583189f-34ec-4aea-a4a3-ce2600a3c07d" (UID: "c583189f-34ec-4aea-a4a3-ce2600a3c07d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.330059 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c583189f-34ec-4aea-a4a3-ce2600a3c07d-kube-api-access-spfpf" (OuterVolumeSpecName: "kube-api-access-spfpf") pod "c583189f-34ec-4aea-a4a3-ce2600a3c07d" (UID: "c583189f-34ec-4aea-a4a3-ce2600a3c07d"). InnerVolumeSpecName "kube-api-access-spfpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.360080 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c583189f-34ec-4aea-a4a3-ce2600a3c07d" (UID: "c583189f-34ec-4aea-a4a3-ce2600a3c07d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.389781 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data" (OuterVolumeSpecName: "config-data") pod "c583189f-34ec-4aea-a4a3-ce2600a3c07d" (UID: "c583189f-34ec-4aea-a4a3-ce2600a3c07d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.423149 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-combined-ca-bundle\") pod \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.423198 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz5rv\" (UniqueName: \"kubernetes.io/projected/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-kube-api-access-mz5rv\") pod \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.423240 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data\") pod \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.423431 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data-custom\") pod \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\" (UID: \"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67\") " Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.424134 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.424154 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.424166 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c583189f-34ec-4aea-a4a3-ce2600a3c07d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.424177 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spfpf\" (UniqueName: \"kubernetes.io/projected/c583189f-34ec-4aea-a4a3-ce2600a3c07d-kube-api-access-spfpf\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.430669 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" (UID: "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.442458 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-kube-api-access-mz5rv" (OuterVolumeSpecName: "kube-api-access-mz5rv") pod "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" (UID: "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67"). InnerVolumeSpecName "kube-api-access-mz5rv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.474550 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" (UID: "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.489784 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data" (OuterVolumeSpecName: "config-data") pod "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" (UID: "bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.526979 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.527019 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.527035 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz5rv\" (UniqueName: \"kubernetes.io/projected/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-kube-api-access-mz5rv\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.527049 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.732161 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85746db47f-n5p9l" event={"ID":"bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67","Type":"ContainerDied","Data":"39ddd99e789dab3ba2af760b84e53ede6905d7cf5040dd06eff3521cdae76d69"} Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.732196 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85746db47f-n5p9l" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.738985 4853 scope.go:117] "RemoveContainer" containerID="cb51dfd75abb5c08a50f14caf01296efa89724a81d991dbf134fb6a2819e0a93" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.740587 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" event={"ID":"c583189f-34ec-4aea-a4a3-ce2600a3c07d","Type":"ContainerDied","Data":"5e698c9b38f9e49f0781307e193a2e865bdfe70dd7f2ca5b1d00a254f5a0cb3d"} Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.740704 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-76d69ccdc-dctkd" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.789632 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-85746db47f-n5p9l"] Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.803624 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-85746db47f-n5p9l"] Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.805383 4853 scope.go:117] "RemoveContainer" containerID="a94e6308fded3515eea03b5ce99878e35de576ea5be910bc338157cc29cd553a" Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.819541 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-76d69ccdc-dctkd"] Dec 09 17:22:22 crc kubenswrapper[4853]: I1209 17:22:22.840239 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-76d69ccdc-dctkd"] Dec 09 17:22:23 crc kubenswrapper[4853]: I1209 17:22:23.581463 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" path="/var/lib/kubelet/pods/bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67/volumes" Dec 09 17:22:23 crc kubenswrapper[4853]: I1209 17:22:23.582507 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" path="/var/lib/kubelet/pods/c583189f-34ec-4aea-a4a3-ce2600a3c07d/volumes" Dec 09 17:22:23 crc kubenswrapper[4853]: I1209 17:22:23.754699 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" event={"ID":"fb169fd1-98da-414e-9487-67a58e01f0a6","Type":"ContainerStarted","Data":"dc6df680aecde7d4948282ed3194f5671494ec2d190ca83a33ab3135a99e7816"} Dec 09 17:22:23 crc kubenswrapper[4853]: I1209 17:22:23.785175 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" podStartSLOduration=3.448383716 podStartE2EDuration="16.785154728s" podCreationTimestamp="2025-12-09 17:22:07 +0000 UTC" firstStartedPulling="2025-12-09 17:22:09.476537419 +0000 UTC m=+1556.411276601" lastFinishedPulling="2025-12-09 17:22:22.813308431 +0000 UTC m=+1569.748047613" observedRunningTime="2025-12-09 17:22:23.782225415 +0000 UTC m=+1570.716964607" watchObservedRunningTime="2025-12-09 17:22:23.785154728 +0000 UTC m=+1570.719893920" Dec 09 17:22:23 crc kubenswrapper[4853]: E1209 17:22:23.865957 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 17:22:23 crc kubenswrapper[4853]: E1209 17:22:23.872005 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 17:22:23 crc kubenswrapper[4853]: E1209 17:22:23.887921 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 17:22:23 crc 
kubenswrapper[4853]: E1209 17:22:23.888485 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-84b8897886-8dc2w" podUID="7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" containerName="heat-engine" Dec 09 17:22:26 crc kubenswrapper[4853]: E1209 17:22:26.335822 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf3ebbb_a614_4447_a448_f6dbf2ab5173.slice/crio-2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf3ebbb_a614_4447_a448_f6dbf2ab5173.slice/crio-conmon-2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6.scope\": RecentStats: unable to find data in memory cache]" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.672385 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.788306 4853 generic.go:334] "Generic (PLEG): container finished" podID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerID="2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6" exitCode=0 Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.788362 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerDied","Data":"2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6"} Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.788381 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.788403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cf3ebbb-a614-4447-a448-f6dbf2ab5173","Type":"ContainerDied","Data":"cc0fc8d85a000510a51b9164662c3dbb8004a11749610e5a53211928f532f3e6"} Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.788427 4853 scope.go:117] "RemoveContainer" containerID="e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.814418 4853 scope.go:117] "RemoveContainer" containerID="3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.827366 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nnhr\" (UniqueName: \"kubernetes.io/projected/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-kube-api-access-6nnhr\") pod \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.827435 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-sg-core-conf-yaml\") pod \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.827507 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-scripts\") pod \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.827575 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-combined-ca-bundle\") pod \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.827616 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-config-data\") pod \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.827646 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-log-httpd\") pod \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.827762 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-run-httpd\") pod \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\" (UID: \"5cf3ebbb-a614-4447-a448-f6dbf2ab5173\") " Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.828883 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5cf3ebbb-a614-4447-a448-f6dbf2ab5173" (UID: "5cf3ebbb-a614-4447-a448-f6dbf2ab5173"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.830290 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5cf3ebbb-a614-4447-a448-f6dbf2ab5173" (UID: "5cf3ebbb-a614-4447-a448-f6dbf2ab5173"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.842850 4853 scope.go:117] "RemoveContainer" containerID="e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.848049 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-scripts" (OuterVolumeSpecName: "scripts") pod "5cf3ebbb-a614-4447-a448-f6dbf2ab5173" (UID: "5cf3ebbb-a614-4447-a448-f6dbf2ab5173"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.851624 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-kube-api-access-6nnhr" (OuterVolumeSpecName: "kube-api-access-6nnhr") pod "5cf3ebbb-a614-4447-a448-f6dbf2ab5173" (UID: "5cf3ebbb-a614-4447-a448-f6dbf2ab5173"). InnerVolumeSpecName "kube-api-access-6nnhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.876723 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5cf3ebbb-a614-4447-a448-f6dbf2ab5173" (UID: "5cf3ebbb-a614-4447-a448-f6dbf2ab5173"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.933560 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nnhr\" (UniqueName: \"kubernetes.io/projected/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-kube-api-access-6nnhr\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.933616 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.933632 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.933644 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.933654 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.941489 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cf3ebbb-a614-4447-a448-f6dbf2ab5173" (UID: "5cf3ebbb-a614-4447-a448-f6dbf2ab5173"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.968582 4853 scope.go:117] "RemoveContainer" containerID="2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.994805 4853 scope.go:117] "RemoveContainer" containerID="e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef" Dec 09 17:22:26 crc kubenswrapper[4853]: E1209 17:22:26.995302 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef\": container with ID starting with e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef not found: ID does not exist" containerID="e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.995347 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef"} err="failed to get container status \"e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef\": rpc error: code = NotFound desc = could not find container \"e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef\": container with ID starting with e25fdcc5e9b04d10ecc2e157ceac20ddd1279623fd18de7fcf2172ea689e9eef not found: ID does not exist" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.995376 4853 scope.go:117] "RemoveContainer" containerID="3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155" Dec 09 17:22:26 crc kubenswrapper[4853]: E1209 17:22:26.995635 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155\": container with ID starting with 3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155 not found: ID does not exist" containerID="3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.995653 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155"} err="failed to get container status \"3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155\": rpc error: code = NotFound desc = could not find container \"3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155\": container with ID starting with 3bc6484c0fe851076d60fd37057c701698933d5c1af101a2602c46efd8364155 not found: ID does not exist" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.995667 4853 scope.go:117] "RemoveContainer" containerID="e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96" Dec 09 17:22:26 crc kubenswrapper[4853]: E1209 17:22:26.995835 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96\": container with ID starting with e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96 not found: ID does not exist" containerID="e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.995850 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96"} err="failed to get container status \"e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96\": rpc error: code = NotFound desc = could not find container \"e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96\": container with ID starting with e464801ba16c50e7c93c171cceb51f7de3aeaf222111b74dfd79f2abb7d21e96 not found: ID does not exist" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.995863 4853 scope.go:117] "RemoveContainer" containerID="2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6" Dec 09 17:22:26 crc kubenswrapper[4853]: E1209 17:22:26.996027 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6\": container with ID starting with 2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6 not found: ID does not exist" containerID="2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6" Dec 09 17:22:26 crc kubenswrapper[4853]: I1209 17:22:26.996046 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6"} err="failed to get container status \"2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6\": rpc error: code = NotFound desc = could not find container \"2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6\": container with ID starting with 2f3a6952dac7ea9722663405862b16f3689bee1ed82812bcc43c555e3f3f64b6 not found: ID does not exist" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.021783 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-config-data" (OuterVolumeSpecName: "config-data") pod "5cf3ebbb-a614-4447-a448-f6dbf2ab5173" (UID: "5cf3ebbb-a614-4447-a448-f6dbf2ab5173"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.035875 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.035911 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf3ebbb-a614-4447-a448-f6dbf2ab5173-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.124813 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.136084 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.157423 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.157922 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerName="init" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.157940 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerName="init" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.157953 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="proxy-httpd" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.157959 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="proxy-httpd" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.157973 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-central-agent" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.157980 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-central-agent" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.157993 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-notification-agent" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.157999 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-notification-agent" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.158009 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerName="heat-cfnapi" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158014 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerName="heat-cfnapi" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.158036 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerName="heat-api" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158041 4853 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerName="heat-api" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.158058 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerName="dnsmasq-dns" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158063 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerName="dnsmasq-dns" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.158078 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerName="heat-api" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158084 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerName="heat-api" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.158104 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="sg-core" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158110 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="sg-core" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158317 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-central-agent" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158332 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ebed7f-14e0-4b36-bf40-f66e71e044b6" containerName="dnsmasq-dns" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158338 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerName="heat-cfnapi" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158349 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="ceilometer-notification-agent" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158360 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="sg-core" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158369 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerName="heat-api" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158379 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0dc9d8-d30e-42b3-83d7-cfc1dd312d67" containerName="heat-api" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158389 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" containerName="proxy-httpd" Dec 09 17:22:27 crc kubenswrapper[4853]: E1209 17:22:27.158636 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerName="heat-cfnapi" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158651 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerName="heat-cfnapi" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.158997 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c583189f-34ec-4aea-a4a3-ce2600a3c07d" containerName="heat-cfnapi" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.160696 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.162581 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.162866 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.176939 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.341189 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-scripts\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.341503 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.341532 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-log-httpd\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.341552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-config-data\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.341643 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbvj\" (UniqueName: \"kubernetes.io/projected/950d43a6-fa3d-4040-bb01-ecf255776a70-kube-api-access-4vbvj\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.341692 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.341719 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-run-httpd\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.443479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 
17:22:27.443526 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-log-httpd\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.443551 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-config-data\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.443652 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vbvj\" (UniqueName: \"kubernetes.io/projected/950d43a6-fa3d-4040-bb01-ecf255776a70-kube-api-access-4vbvj\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.443702 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.443728 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-run-httpd\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.443766 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-scripts\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.445557 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-run-httpd\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.445832 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-log-httpd\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.457403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.457583 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-config-data\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.457586 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.457844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-scripts\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.472206 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbvj\" (UniqueName: \"kubernetes.io/projected/950d43a6-fa3d-4040-bb01-ecf255776a70-kube-api-access-4vbvj\") pod \"ceilometer-0\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.552045 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:27 crc kubenswrapper[4853]: I1209 17:22:27.632410 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf3ebbb-a614-4447-a448-f6dbf2ab5173" path="/var/lib/kubelet/pods/5cf3ebbb-a614-4447-a448-f6dbf2ab5173/volumes" Dec 09 17:22:28 crc kubenswrapper[4853]: I1209 17:22:28.016644 4853 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf263ce7c-227e-4025-af32-6bf4176920f7"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf263ce7c-227e-4025-af32-6bf4176920f7] : Timed out while waiting for systemd to remove kubepods-besteffort-podf263ce7c_227e_4025_af32_6bf4176920f7.slice" Dec 09 17:22:28 crc kubenswrapper[4853]: E1209 17:22:28.016976 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podf263ce7c-227e-4025-af32-6bf4176920f7] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf263ce7c-227e-4025-af32-6bf4176920f7] : Timed out while waiting for systemd to remove kubepods-besteffort-podf263ce7c_227e_4025_af32_6bf4176920f7.slice" pod="openstack/nova-cell0-db-create-9g9jw" podUID="f263ce7c-227e-4025-af32-6bf4176920f7" Dec 09 17:22:28 crc kubenswrapper[4853]: I1209 17:22:28.018958 4853 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod24b35a27-0dc5-4c08-9505-f81db9987470"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod24b35a27-0dc5-4c08-9505-f81db9987470] : Timed out while waiting for systemd to remove kubepods-besteffort-pod24b35a27_0dc5_4c08_9505_f81db9987470.slice" Dec 09 17:22:28 crc kubenswrapper[4853]: E1209 17:22:28.019018 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod24b35a27-0dc5-4c08-9505-f81db9987470] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod24b35a27-0dc5-4c08-9505-f81db9987470] : Timed out while waiting for systemd to remove kubepods-besteffort-pod24b35a27_0dc5_4c08_9505_f81db9987470.slice" pod="openstack/nova-api-db-create-v4hn8" podUID="24b35a27-0dc5-4c08-9505-f81db9987470" Dec 09 17:22:28 crc kubenswrapper[4853]: I1209 17:22:28.302481 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:28 crc kubenswrapper[4853]: I1209 17:22:28.844038 4853 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-db-create-9g9jw" Dec 09 17:22:28 crc kubenswrapper[4853]: I1209 17:22:28.845454 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerStarted","Data":"268489a78cdfff878e12bf5f3901ecb1503c53cc24f71ba5f510e7e8bf028cb1"} Dec 09 17:22:28 crc kubenswrapper[4853]: I1209 17:22:28.845538 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4hn8" Dec 09 17:22:29 crc kubenswrapper[4853]: I1209 17:22:29.948257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerStarted","Data":"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9"} Dec 09 17:22:30 crc kubenswrapper[4853]: I1209 17:22:30.967742 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerStarted","Data":"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0"} Dec 09 17:22:31 crc kubenswrapper[4853]: I1209 17:22:31.980369 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerStarted","Data":"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138"} Dec 09 17:22:32 crc kubenswrapper[4853]: I1209 17:22:32.996417 4853 generic.go:334] "Generic (PLEG): container finished" podID="7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" containerID="9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd" exitCode=0 Dec 09 17:22:32 crc kubenswrapper[4853]: I1209 17:22:32.996479 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84b8897886-8dc2w" event={"ID":"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b","Type":"ContainerDied","Data":"9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd"} Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.145782 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.255155 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data-custom\") pod \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.255372 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-combined-ca-bundle\") pod \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.255420 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data\") pod \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.255846 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdbt8\" (UniqueName: \"kubernetes.io/projected/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-kube-api-access-xdbt8\") pod \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\" (UID: \"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b\") " Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.262397 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" (UID: "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.264478 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-kube-api-access-xdbt8" (OuterVolumeSpecName: "kube-api-access-xdbt8") pod "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" (UID: "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b"). InnerVolumeSpecName "kube-api-access-xdbt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.300664 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" (UID: "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.333170 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data" (OuterVolumeSpecName: "config-data") pod "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" (UID: "7cc5b5ca-8ff6-4122-b915-3d35d6867b5b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.359687 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdbt8\" (UniqueName: \"kubernetes.io/projected/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-kube-api-access-xdbt8\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.359732 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.359768 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:33 crc kubenswrapper[4853]: I1209 17:22:33.359780 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.009395 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-84b8897886-8dc2w" event={"ID":"7cc5b5ca-8ff6-4122-b915-3d35d6867b5b","Type":"ContainerDied","Data":"3be7e4855d79dfafec179b24f9ce42049ff897a4fa7c148e18761f428cfc4a38"} Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.009447 4853 scope.go:117] "RemoveContainer" containerID="9035197de58fae44bbd19523d50a1b8ba979ba37389c0e8abdadf8c6a4d93ffd" Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.010802 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-84b8897886-8dc2w" Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.015569 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerStarted","Data":"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4"} Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.016028 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.057481 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.380840938 podStartE2EDuration="7.057457441s" podCreationTimestamp="2025-12-09 17:22:27 +0000 UTC" firstStartedPulling="2025-12-09 17:22:28.308477859 +0000 UTC m=+1575.243217041" lastFinishedPulling="2025-12-09 17:22:32.985094352 +0000 UTC m=+1579.919833544" observedRunningTime="2025-12-09 17:22:34.037225335 +0000 UTC m=+1580.971964527" watchObservedRunningTime="2025-12-09 17:22:34.057457441 +0000 UTC m=+1580.992196623" Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.077292 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-84b8897886-8dc2w"] Dec 09 17:22:34 crc kubenswrapper[4853]: I1209 17:22:34.097530 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-84b8897886-8dc2w"] Dec 09 17:22:35 crc kubenswrapper[4853]: I1209 17:22:35.582390 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" path="/var/lib/kubelet/pods/7cc5b5ca-8ff6-4122-b915-3d35d6867b5b/volumes" Dec 09 17:22:36 crc kubenswrapper[4853]: I1209 17:22:36.043565 4853 generic.go:334] 
"Generic (PLEG): container finished" podID="fb169fd1-98da-414e-9487-67a58e01f0a6" containerID="dc6df680aecde7d4948282ed3194f5671494ec2d190ca83a33ab3135a99e7816" exitCode=0 Dec 09 17:22:36 crc kubenswrapper[4853]: I1209 17:22:36.043653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" event={"ID":"fb169fd1-98da-414e-9487-67a58e01f0a6","Type":"ContainerDied","Data":"dc6df680aecde7d4948282ed3194f5671494ec2d190ca83a33ab3135a99e7816"} Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.592681 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.707230 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6trm\" (UniqueName: \"kubernetes.io/projected/fb169fd1-98da-414e-9487-67a58e01f0a6-kube-api-access-t6trm\") pod \"fb169fd1-98da-414e-9487-67a58e01f0a6\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.707339 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-config-data\") pod \"fb169fd1-98da-414e-9487-67a58e01f0a6\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.707476 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-scripts\") pod \"fb169fd1-98da-414e-9487-67a58e01f0a6\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.707661 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-combined-ca-bundle\") pod \"fb169fd1-98da-414e-9487-67a58e01f0a6\" (UID: \"fb169fd1-98da-414e-9487-67a58e01f0a6\") " Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.716860 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-scripts" (OuterVolumeSpecName: "scripts") pod "fb169fd1-98da-414e-9487-67a58e01f0a6" (UID: "fb169fd1-98da-414e-9487-67a58e01f0a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.749504 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb169fd1-98da-414e-9487-67a58e01f0a6-kube-api-access-t6trm" (OuterVolumeSpecName: "kube-api-access-t6trm") pod "fb169fd1-98da-414e-9487-67a58e01f0a6" (UID: "fb169fd1-98da-414e-9487-67a58e01f0a6"). InnerVolumeSpecName "kube-api-access-t6trm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.768064 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-config-data" (OuterVolumeSpecName: "config-data") pod "fb169fd1-98da-414e-9487-67a58e01f0a6" (UID: "fb169fd1-98da-414e-9487-67a58e01f0a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.795002 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb169fd1-98da-414e-9487-67a58e01f0a6" (UID: "fb169fd1-98da-414e-9487-67a58e01f0a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.810733 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.810765 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6trm\" (UniqueName: \"kubernetes.io/projected/fb169fd1-98da-414e-9487-67a58e01f0a6-kube-api-access-t6trm\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.810780 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:37 crc kubenswrapper[4853]: I1209 17:22:37.810789 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb169fd1-98da-414e-9487-67a58e01f0a6-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.063863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" event={"ID":"fb169fd1-98da-414e-9487-67a58e01f0a6","Type":"ContainerDied","Data":"4aa362c485ecc073ec18c470f9ae557e2067be1c68aa3b2b339227c38d28f727"} Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.063900 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2mxrk" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.063910 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa362c485ecc073ec18c470f9ae557e2067be1c68aa3b2b339227c38d28f727" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.190058 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 17:22:38 crc kubenswrapper[4853]: E1209 17:22:38.190777 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" containerName="heat-engine" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.190800 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" containerName="heat-engine" Dec 09 17:22:38 crc kubenswrapper[4853]: E1209 17:22:38.190825 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb169fd1-98da-414e-9487-67a58e01f0a6" containerName="nova-cell0-conductor-db-sync" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.190834 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb169fd1-98da-414e-9487-67a58e01f0a6" containerName="nova-cell0-conductor-db-sync" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.191132 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc5b5ca-8ff6-4122-b915-3d35d6867b5b" containerName="heat-engine" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.191172 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb169fd1-98da-414e-9487-67a58e01f0a6" containerName="nova-cell0-conductor-db-sync" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.192184 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.197328 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lzxnd" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.210710 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.221059 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.322927 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.323098 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.323190 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqj2k\" (UniqueName: \"kubernetes.io/projected/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-kube-api-access-rqj2k\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.425413 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqj2k\" (UniqueName: \"kubernetes.io/projected/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-kube-api-access-rqj2k\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.425507 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.425666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.429937 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.435762 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.484439 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqj2k\" (UniqueName: \"kubernetes.io/projected/b6e8c860-e6af-4b73-a1d4-764a2bc42f39-kube-api-access-rqj2k\") pod \"nova-cell0-conductor-0\" (UID: \"b6e8c860-e6af-4b73-a1d4-764a2bc42f39\") " pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.528582 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.897993 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.898487 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="ceilometer-central-agent" containerID="cri-o://345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9" gracePeriod=30 Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.898947 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="proxy-httpd" containerID="cri-o://d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4" gracePeriod=30 Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.898986 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="sg-core" containerID="cri-o://08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138" gracePeriod=30 Dec 09 17:22:38 crc kubenswrapper[4853]: I1209 17:22:38.899017 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="ceilometer-notification-agent" containerID="cri-o://c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0" gracePeriod=30 Dec 09 17:22:39 crc kubenswrapper[4853]: W1209 17:22:39.058355 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6e8c860_e6af_4b73_a1d4_764a2bc42f39.slice/crio-85eb55a90a504c5f0b2cc69f66d05b4d4fa7f703c69ed33d80b94a0935c0be1d WatchSource:0}: Error finding container 85eb55a90a504c5f0b2cc69f66d05b4d4fa7f703c69ed33d80b94a0935c0be1d: Status 404 returned error can't find the container with id 85eb55a90a504c5f0b2cc69f66d05b4d4fa7f703c69ed33d80b94a0935c0be1d Dec 09 17:22:39 crc kubenswrapper[4853]: I1209 17:22:39.073499 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 17:22:39 crc kubenswrapper[4853]: I1209 17:22:39.106170 4853 generic.go:334] "Generic (PLEG): container finished" podID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerID="08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138" exitCode=2 Dec 09 17:22:39 crc kubenswrapper[4853]: I1209 17:22:39.106344 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerDied","Data":"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138"} Dec 09 17:22:39 crc kubenswrapper[4853]: I1209 17:22:39.110940 4853 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b6e8c860-e6af-4b73-a1d4-764a2bc42f39","Type":"ContainerStarted","Data":"85eb55a90a504c5f0b2cc69f66d05b4d4fa7f703c69ed33d80b94a0935c0be1d"} Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.120552 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.125172 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b6e8c860-e6af-4b73-a1d4-764a2bc42f39","Type":"ContainerStarted","Data":"4b81b1de4dfdb6a7bab60b14bce117dd73e71a291dd0205b4a4fc352140b85c6"} Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.125483 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.129867 4853 generic.go:334] "Generic (PLEG): container finished" podID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerID="d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4" exitCode=0 Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.129898 4853 generic.go:334] "Generic (PLEG): container finished" podID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerID="c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0" exitCode=0 Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.129906 4853 generic.go:334] "Generic (PLEG): container finished" podID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerID="345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9" exitCode=0 Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.129917 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerDied","Data":"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4"} Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.129969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerDied","Data":"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0"} Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.129974 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.130000 4853 scope.go:117] "RemoveContainer" containerID="d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.129987 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerDied","Data":"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9"} Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.130243 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950d43a6-fa3d-4040-bb01-ecf255776a70","Type":"ContainerDied","Data":"268489a78cdfff878e12bf5f3901ecb1503c53cc24f71ba5f510e7e8bf028cb1"} Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.168225 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-sg-core-conf-yaml\") pod \"950d43a6-fa3d-4040-bb01-ecf255776a70\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.168290 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vbvj\" (UniqueName: \"kubernetes.io/projected/950d43a6-fa3d-4040-bb01-ecf255776a70-kube-api-access-4vbvj\") pod \"950d43a6-fa3d-4040-bb01-ecf255776a70\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.168319 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-combined-ca-bundle\") pod \"950d43a6-fa3d-4040-bb01-ecf255776a70\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.168373 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-config-data\") pod \"950d43a6-fa3d-4040-bb01-ecf255776a70\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.168464 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-run-httpd\") pod \"950d43a6-fa3d-4040-bb01-ecf255776a70\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.168492 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-log-httpd\") pod \"950d43a6-fa3d-4040-bb01-ecf255776a70\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.168661 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-scripts\") pod \"950d43a6-fa3d-4040-bb01-ecf255776a70\" (UID: \"950d43a6-fa3d-4040-bb01-ecf255776a70\") " Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.169878 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"950d43a6-fa3d-4040-bb01-ecf255776a70" (UID: "950d43a6-fa3d-4040-bb01-ecf255776a70"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.170101 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "950d43a6-fa3d-4040-bb01-ecf255776a70" (UID: "950d43a6-fa3d-4040-bb01-ecf255776a70"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.171630 4853 scope.go:117] "RemoveContainer" containerID="08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.175984 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-scripts" (OuterVolumeSpecName: "scripts") pod "950d43a6-fa3d-4040-bb01-ecf255776a70" (UID: "950d43a6-fa3d-4040-bb01-ecf255776a70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.187947 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950d43a6-fa3d-4040-bb01-ecf255776a70-kube-api-access-4vbvj" (OuterVolumeSpecName: "kube-api-access-4vbvj") pod "950d43a6-fa3d-4040-bb01-ecf255776a70" (UID: "950d43a6-fa3d-4040-bb01-ecf255776a70"). InnerVolumeSpecName "kube-api-access-4vbvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.250681 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "950d43a6-fa3d-4040-bb01-ecf255776a70" (UID: "950d43a6-fa3d-4040-bb01-ecf255776a70"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.252380 4853 scope.go:117] "RemoveContainer" containerID="c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.260904 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.260885151 podStartE2EDuration="2.260885151s" podCreationTimestamp="2025-12-09 17:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:40.167690673 +0000 UTC m=+1587.102429855" watchObservedRunningTime="2025-12-09 17:22:40.260885151 +0000 UTC m=+1587.195624323" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.282304 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.282353 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vbvj\" (UniqueName: \"kubernetes.io/projected/950d43a6-fa3d-4040-bb01-ecf255776a70-kube-api-access-4vbvj\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.282372 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.282385 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950d43a6-fa3d-4040-bb01-ecf255776a70-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.282403 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.316807 4853 scope.go:117] "RemoveContainer" containerID="345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.340510 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-config-data" (OuterVolumeSpecName: "config-data") pod "950d43a6-fa3d-4040-bb01-ecf255776a70" (UID: "950d43a6-fa3d-4040-bb01-ecf255776a70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.346874 4853 scope.go:117] "RemoveContainer" containerID="d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4" Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.347526 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4\": container with ID starting with d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4 not found: ID does not exist" containerID="d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.347631 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4"} err="failed to get container status \"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4\": rpc error: code = NotFound desc = could not find container \"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4\": container with ID starting with d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.347693 4853 scope.go:117] "RemoveContainer" containerID="08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138" Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.348261 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138\": container with ID starting with 08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138 not found: ID does not exist" containerID="08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.348300 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138"} err="failed to get container status \"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138\": rpc error: code = NotFound desc = could not find container \"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138\": container with ID starting with 08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.348344 4853 scope.go:117] "RemoveContainer" containerID="c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0" Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.348748 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0\": container with ID starting with c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0 not found: ID does not exist" containerID="c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.348798 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0"} err="failed to get container status \"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0\": rpc error: code = NotFound desc = could not 
find container \"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0\": container with ID starting with c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.348827 4853 scope.go:117] "RemoveContainer" containerID="345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9" Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.349152 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9\": container with ID starting with 345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9 not found: ID does not exist" containerID="345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.349191 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9"} err="failed to get container status \"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9\": rpc error: code = NotFound desc = could not find container \"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9\": container with ID starting with 345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.349308 4853 scope.go:117] "RemoveContainer" containerID="d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.349693 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4"} err="failed to get container status \"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4\": rpc error: code = NotFound desc = could not find container \"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4\": container with ID starting with d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.349751 4853 scope.go:117] "RemoveContainer" containerID="08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.350048 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138"} err="failed to get container status \"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138\": rpc error: code = NotFound desc = could not find container \"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138\": container with ID starting with 08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.350074 4853 scope.go:117] "RemoveContainer" containerID="c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.350341 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0"} err="failed to get container status \"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0\": rpc error: code = NotFound desc = could not 
find container \"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0\": container with ID starting with c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.350424 4853 scope.go:117] "RemoveContainer" containerID="345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.350719 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9"} err="failed to get container status \"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9\": rpc error: code = NotFound desc = could not find container \"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9\": container with ID starting with 345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.350747 4853 scope.go:117] "RemoveContainer" containerID="d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.352026 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4"} err="failed to get container status \"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4\": rpc error: code = NotFound desc = could not find container \"d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4\": container with ID starting with d73b398959fe849bff4877bb2909d13b4b891fcc0213b327398d656136549fc4 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.352187 4853 scope.go:117] "RemoveContainer" containerID="08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.352648 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138"} err="failed to get container status \"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138\": rpc error: code = NotFound desc = could not find container \"08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138\": container with ID starting with 08bda6530f1da25d660a2034c53d721e34e69ddea776533fb38d088dbce0b138 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.352701 4853 scope.go:117] "RemoveContainer" containerID="c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.353272 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0"} err="failed to get container status \"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0\": rpc error: code = NotFound desc = could not find container \"c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0\": container with ID starting with c627a413385bcbad072aeb2518f55fbff4ac3f5d6d18ddd1e0e62209afbae6e0 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.353318 4853 scope.go:117] "RemoveContainer" containerID="345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.353759 4853 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9"} err="failed to get container status \"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9\": rpc error: code = NotFound desc = could not find container \"345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9\": container with ID starting with 345b838d6a8c311da59552940a91cd33258a4602dc6368c324deb8bd6c8613b9 not found: ID does not exist" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.358709 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "950d43a6-fa3d-4040-bb01-ecf255776a70" (UID: "950d43a6-fa3d-4040-bb01-ecf255776a70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.384383 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.384430 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950d43a6-fa3d-4040-bb01-ecf255776a70-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.537797 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.551553 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.579909 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.580516 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="sg-core" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.580542 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="sg-core" Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.580565 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="proxy-httpd" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.580573 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="proxy-httpd" Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.580617 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="ceilometer-central-agent" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.580635 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="ceilometer-central-agent" Dec 09 17:22:40 crc kubenswrapper[4853]: E1209 17:22:40.580659 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="ceilometer-notification-agent" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.580667 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" 
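
The long run of RemoveContainer / "DeleteContainer returned error ... not found" pairs above is the kubelet re-attempting deletion of containers CRI-O has already removed; NotFound is tolerated, which makes the cleanup idempotent rather than a real failure. A hedged sketch of that pattern, with an in-memory map standing in for the runtime (errNotFound and runtimeState are invented names):

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

type runtimeState map[string]bool // containerID -> exists

// remove deletes a container, reporting NotFound if it is already gone.
func (r runtimeState) remove(id string) error {
	if !r[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(r, id)
	return nil
}

func main() {
	rt := runtimeState{"d73b3989": true}
	for i := 0; i < 3; i++ { // after the first pass, retries only hit NotFound
		if err := rt.remove("d73b3989"); errors.Is(err, errNotFound) {
			fmt.Println("already gone, ignoring:", err)
			continue
		}
		fmt.Println("removed")
	}
}
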
containerName="ceilometer-notification-agent" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.580943 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="ceilometer-central-agent" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.580968 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="proxy-httpd" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.580987 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="ceilometer-notification-agent" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.581003 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" containerName="sg-core" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.583457 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.586388 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.586726 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.589723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-scripts\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.589796 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-log-httpd\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.589851 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.589913 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzvq\" (UniqueName: \"kubernetes.io/projected/2b93f329-65de-4a21-b711-adbb0ddab604-kube-api-access-6fzvq\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.590198 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-config-data\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.590422 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.590533 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-run-httpd\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.624088 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.692938 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-config-data\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.693032 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.693086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-run-httpd\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.693129 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-scripts\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.693151 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-log-httpd\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.693191 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.693235 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzvq\" (UniqueName: \"kubernetes.io/projected/2b93f329-65de-4a21-b711-adbb0ddab604-kube-api-access-6fzvq\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.693928 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-run-httpd\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.694240 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-log-httpd\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.698200 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.699247 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-config-data\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.706419 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.708472 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-scripts\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.709380 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzvq\" (UniqueName: \"kubernetes.io/projected/2b93f329-65de-4a21-b711-adbb0ddab604-kube-api-access-6fzvq\") pod \"ceilometer-0\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " pod="openstack/ceilometer-0" Dec 09 17:22:40 crc kubenswrapper[4853]: I1209 17:22:40.905911 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:41 crc kubenswrapper[4853]: I1209 17:22:41.475027 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:41 crc kubenswrapper[4853]: I1209 17:22:41.579715 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950d43a6-fa3d-4040-bb01-ecf255776a70" path="/var/lib/kubelet/pods/950d43a6-fa3d-4040-bb01-ecf255776a70/volumes" Dec 09 17:22:42 crc kubenswrapper[4853]: I1209 17:22:42.154906 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerStarted","Data":"2d41d63e2d3ae2aa7f4c41976d841c5210236534805f5c170e4b87e2d873173a"} Dec 09 17:22:43 crc kubenswrapper[4853]: I1209 17:22:43.166347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerStarted","Data":"5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d"} Dec 09 17:22:44 crc kubenswrapper[4853]: I1209 17:22:44.198199 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerStarted","Data":"b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659"} Dec 09 17:22:45 crc kubenswrapper[4853]: I1209 17:22:45.215143 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerStarted","Data":"c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020"} Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.228865 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerStarted","Data":"1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb"} Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.229395 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.262025 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.131610182 podStartE2EDuration="6.261989448s" podCreationTimestamp="2025-12-09 17:22:40 +0000 UTC" firstStartedPulling="2025-12-09 17:22:41.474104952 +0000 UTC m=+1588.408844134" lastFinishedPulling="2025-12-09 17:22:45.604484218 +0000 UTC m=+1592.539223400" observedRunningTime="2025-12-09 17:22:46.252366529 +0000 UTC m=+1593.187105711" watchObservedRunningTime="2025-12-09 17:22:46.261989448 +0000 UTC m=+1593.196728630" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.321672 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-9klzn"] Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.330764 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.337471 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/016225c8-a52d-43fd-b0f9-b1df2c77a52b-operator-scripts\") pod \"aodh-db-create-9klzn\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.337618 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mdg2\" (UniqueName: \"kubernetes.io/projected/016225c8-a52d-43fd-b0f9-b1df2c77a52b-kube-api-access-8mdg2\") pod \"aodh-db-create-9klzn\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.342934 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9klzn"] Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.429204 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-8e7b-account-create-update-qmpxf"] Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.430762 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.433588 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.439860 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/016225c8-a52d-43fd-b0f9-b1df2c77a52b-operator-scripts\") pod \"aodh-db-create-9klzn\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.439942 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvdl\" (UniqueName: \"kubernetes.io/projected/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-kube-api-access-rjvdl\") pod \"aodh-8e7b-account-create-update-qmpxf\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.440026 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mdg2\" (UniqueName: \"kubernetes.io/projected/016225c8-a52d-43fd-b0f9-b1df2c77a52b-kube-api-access-8mdg2\") pod \"aodh-db-create-9klzn\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.440052 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-operator-scripts\") pod \"aodh-8e7b-account-create-update-qmpxf\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.440805 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/016225c8-a52d-43fd-b0f9-b1df2c77a52b-operator-scripts\") pod \"aodh-db-create-9klzn\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " pod="openstack/aodh-db-create-9klzn" Dec 09 
17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.444787 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.461627 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-8e7b-account-create-update-qmpxf"] Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.468126 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mdg2\" (UniqueName: \"kubernetes.io/projected/016225c8-a52d-43fd-b0f9-b1df2c77a52b-kube-api-access-8mdg2\") pod \"aodh-db-create-9klzn\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.541794 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvdl\" (UniqueName: \"kubernetes.io/projected/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-kube-api-access-rjvdl\") pod \"aodh-8e7b-account-create-update-qmpxf\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.541908 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-operator-scripts\") pod \"aodh-8e7b-account-create-update-qmpxf\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.542658 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-operator-scripts\") pod \"aodh-8e7b-account-create-update-qmpxf\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.558037 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvdl\" (UniqueName: \"kubernetes.io/projected/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-kube-api-access-rjvdl\") pod \"aodh-8e7b-account-create-update-qmpxf\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.671780 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:46 crc kubenswrapper[4853]: I1209 17:22:46.762776 4853 util.go:30] "No sandbox for pod can be found. 
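
"Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"" marks a reflector finishing its initial LIST for a secret the new pod references, after which a WATCH keeps the local cache current. A minimal sketch of that list-then-watch shape, with a plain channel standing in for the API server watch rather than client-go:

package main

import "fmt"

type event struct{ key, value string }

// populate seeds a cache from an initial LIST, then applies WATCH events,
// the pattern behind the reflector.go:368 entries above.
func populate(initial map[string]string, watch <-chan event) map[string]string {
	cache := map[string]string{}
	for k, v := range initial { // LIST: seed the cache
		cache[k] = v
	}
	fmt.Println("caches populated:", len(cache), "objects")
	for ev := range watch { // WATCH: keep it current
		cache[ev.key] = ev.value
	}
	return cache
}

func main() {
	w := make(chan event, 1)
	w <- event{"openstack/aodh-db-secret", "v1"}
	close(w)
	c := populate(map[string]string{"openstack/nova-api-config-data": "v1"}, w)
	fmt.Println("final cache size:", len(c))
}
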
Need to start a new one" pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:47 crc kubenswrapper[4853]: I1209 17:22:47.372823 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9klzn"] Dec 09 17:22:47 crc kubenswrapper[4853]: I1209 17:22:47.539024 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-8e7b-account-create-update-qmpxf"] Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.257796 4853 generic.go:334] "Generic (PLEG): container finished" podID="2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878" containerID="ffd0c4af94a5fbda481a7f28c3eb73f548a5d8a1e25fdec199b27d450e87401c" exitCode=0 Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.258384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-8e7b-account-create-update-qmpxf" event={"ID":"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878","Type":"ContainerDied","Data":"ffd0c4af94a5fbda481a7f28c3eb73f548a5d8a1e25fdec199b27d450e87401c"} Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.259353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-8e7b-account-create-update-qmpxf" event={"ID":"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878","Type":"ContainerStarted","Data":"1d8b31b91ecbecc6aec17261daf8b6f05be4904b89d00da1f54b10761a841eaf"} Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.265818 4853 generic.go:334] "Generic (PLEG): container finished" podID="016225c8-a52d-43fd-b0f9-b1df2c77a52b" containerID="584e4925a30fb324ef0e31c669550cb6caec9ea064608ec5422e24ccda24b9ec" exitCode=0 Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.265896 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9klzn" event={"ID":"016225c8-a52d-43fd-b0f9-b1df2c77a52b","Type":"ContainerDied","Data":"584e4925a30fb324ef0e31c669550cb6caec9ea064608ec5422e24ccda24b9ec"} Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.265954 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9klzn" event={"ID":"016225c8-a52d-43fd-b0f9-b1df2c77a52b","Type":"ContainerStarted","Data":"f37ff45c37488db1928c968e0d3d2472485eeb96a95eda25b4ebce61448981cf"} Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.266073 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-central-agent" containerID="cri-o://5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d" gracePeriod=30 Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.266092 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="proxy-httpd" containerID="cri-o://1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb" gracePeriod=30 Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.266111 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="sg-core" containerID="cri-o://c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020" gracePeriod=30 Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.266138 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-notification-agent" 
containerID="cri-o://b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659" gracePeriod=30 Dec 09 17:22:48 crc kubenswrapper[4853]: I1209 17:22:48.567725 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.286101 4853 generic.go:334] "Generic (PLEG): container finished" podID="2b93f329-65de-4a21-b711-adbb0ddab604" containerID="1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb" exitCode=0 Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.286483 4853 generic.go:334] "Generic (PLEG): container finished" podID="2b93f329-65de-4a21-b711-adbb0ddab604" containerID="c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020" exitCode=2 Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.286506 4853 generic.go:334] "Generic (PLEG): container finished" podID="2b93f329-65de-4a21-b711-adbb0ddab604" containerID="b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659" exitCode=0 Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.286313 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerDied","Data":"1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb"} Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.286837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerDied","Data":"c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020"} Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.286862 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerDied","Data":"b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659"} Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.325700 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-brt9g"] Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.327979 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.334075 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.334272 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.341053 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-brt9g"] Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.370851 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.370932 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-scripts\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.370989 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmw8\" (UniqueName: \"kubernetes.io/projected/982d513f-97ab-460e-b032-f639b8ef6ff5-kube-api-access-pfmw8\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.371080 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-config-data\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.474268 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmw8\" (UniqueName: \"kubernetes.io/projected/982d513f-97ab-460e-b032-f639b8ef6ff5-kube-api-access-pfmw8\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.474406 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-config-data\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.474497 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.474572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-scripts\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.481969 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.482761 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-config-data\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.483720 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.488219 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.498995 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.513524 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmw8\" (UniqueName: \"kubernetes.io/projected/982d513f-97ab-460e-b032-f639b8ef6ff5-kube-api-access-pfmw8\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.518102 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-scripts\") pod \"nova-cell0-cell-mapping-brt9g\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.538932 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.681737 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.712521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-config-data\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.721285 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.722965 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbjm\" (UniqueName: \"kubernetes.io/projected/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-kube-api-access-mhbjm\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.827934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbjm\" (UniqueName: \"kubernetes.io/projected/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-kube-api-access-mhbjm\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.828160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-config-data\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.828195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.861334 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-config-data\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.875885 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.886135 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.888812 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.890766 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbjm\" (UniqueName: \"kubernetes.io/projected/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-kube-api-access-mhbjm\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.910096 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.911662 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") " pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.922479 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.924537 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.934823 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.942140 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 17:22:49 crc kubenswrapper[4853]: I1209 17:22:49.980663 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045002 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bcf006-6ed4-49d8-92a7-66f5fb141491-logs\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045175 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-config-data\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045261 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-config-data\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfms2\" (UniqueName: \"kubernetes.io/projected/f342c50d-793c-4238-b9e4-de14f3473b4b-kube-api-access-pfms2\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045336 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045562 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f342c50d-793c-4238-b9e4-de14f3473b4b-logs\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045691 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.045802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9cf9\" (UniqueName: \"kubernetes.io/projected/b8bcf006-6ed4-49d8-92a7-66f5fb141491-kube-api-access-t9cf9\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.083706 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.085546 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.091494 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.094861 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.103734 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zwbj9"] Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.107893 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.117244 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zwbj9"] Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148476 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfms2\" (UniqueName: \"kubernetes.io/projected/f342c50d-793c-4238-b9e4-de14f3473b4b-kube-api-access-pfms2\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148525 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148571 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5hcd\" (UniqueName: \"kubernetes.io/projected/dbb040fb-b035-41b8-82c7-b94858c83360-kube-api-access-w5hcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148588 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148675 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f342c50d-793c-4238-b9e4-de14f3473b4b-logs\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148715 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9cf9\" (UniqueName: \"kubernetes.io/projected/b8bcf006-6ed4-49d8-92a7-66f5fb141491-kube-api-access-t9cf9\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148786 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bcf006-6ed4-49d8-92a7-66f5fb141491-logs\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148850 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-config-data\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.148884 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-config-data\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.159801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f342c50d-793c-4238-b9e4-de14f3473b4b-logs\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.161737 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bcf006-6ed4-49d8-92a7-66f5fb141491-logs\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.171063 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-config-data\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.177039 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-config-data\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.177528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.179835 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.194733 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfms2\" (UniqueName: \"kubernetes.io/projected/f342c50d-793c-4238-b9e4-de14f3473b4b-kube-api-access-pfms2\") pod \"nova-metadata-0\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.197115 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9cf9\" (UniqueName: \"kubernetes.io/projected/b8bcf006-6ed4-49d8-92a7-66f5fb141491-kube-api-access-t9cf9\") pod \"nova-api-0\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") " pod="openstack/nova-api-0" Dec 09 17:22:50 crc 
kubenswrapper[4853]: I1209 17:22:50.205203 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-svc\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252385 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252424 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-config\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5hcd\" (UniqueName: \"kubernetes.io/projected/dbb040fb-b035-41b8-82c7-b94858c83360-kube-api-access-w5hcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252561 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252650 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddngf\" (UniqueName: \"kubernetes.io/projected/7086dd56-2ca2-4d9c-ab48-5837b89f117a-kube-api-access-ddngf\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.252692 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.258318 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.263157 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.282134 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5hcd\" (UniqueName: \"kubernetes.io/projected/dbb040fb-b035-41b8-82c7-b94858c83360-kube-api-access-w5hcd\") pod \"nova-cell1-novncproxy-0\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.327830 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-8e7b-account-create-update-qmpxf" event={"ID":"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878","Type":"ContainerDied","Data":"1d8b31b91ecbecc6aec17261daf8b6f05be4904b89d00da1f54b10761a841eaf"} Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.327878 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8b31b91ecbecc6aec17261daf8b6f05be4904b89d00da1f54b10761a841eaf" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.327936 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-8e7b-account-create-update-qmpxf" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.354695 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjvdl\" (UniqueName: \"kubernetes.io/projected/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-kube-api-access-rjvdl\") pod \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.354796 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-operator-scripts\") pod \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\" (UID: \"2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878\") " Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.355104 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddngf\" (UniqueName: \"kubernetes.io/projected/7086dd56-2ca2-4d9c-ab48-5837b89f117a-kube-api-access-ddngf\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.355134 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.355259 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-svc\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.355286 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.355306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-config\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.355367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.356237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: 
I1209 17:22:50.357566 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.363521 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-kube-api-access-rjvdl" (OuterVolumeSpecName: "kube-api-access-rjvdl") pod "2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878" (UID: "2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878"). InnerVolumeSpecName "kube-api-access-rjvdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.364046 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878" (UID: "2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.364995 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.365588 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-svc\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.366269 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-config\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.385856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddngf\" (UniqueName: \"kubernetes.io/projected/7086dd56-2ca2-4d9c-ab48-5837b89f117a-kube-api-access-ddngf\") pod \"dnsmasq-dns-9b86998b5-zwbj9\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.438204 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.457531 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.457792 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjvdl\" (UniqueName: \"kubernetes.io/projected/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878-kube-api-access-rjvdl\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.466237 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.475213 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.502610 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.799615 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.887013 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mdg2\" (UniqueName: \"kubernetes.io/projected/016225c8-a52d-43fd-b0f9-b1df2c77a52b-kube-api-access-8mdg2\") pod \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.887510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/016225c8-a52d-43fd-b0f9-b1df2c77a52b-operator-scripts\") pod \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\" (UID: \"016225c8-a52d-43fd-b0f9-b1df2c77a52b\") " Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.889063 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016225c8-a52d-43fd-b0f9-b1df2c77a52b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "016225c8-a52d-43fd-b0f9-b1df2c77a52b" (UID: "016225c8-a52d-43fd-b0f9-b1df2c77a52b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.898753 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016225c8-a52d-43fd-b0f9-b1df2c77a52b-kube-api-access-8mdg2" (OuterVolumeSpecName: "kube-api-access-8mdg2") pod "016225c8-a52d-43fd-b0f9-b1df2c77a52b" (UID: "016225c8-a52d-43fd-b0f9-b1df2c77a52b"). InnerVolumeSpecName "kube-api-access-8mdg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.907444 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-brt9g"] Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.992333 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mdg2\" (UniqueName: \"kubernetes.io/projected/016225c8-a52d-43fd-b0f9-b1df2c77a52b-kube-api-access-8mdg2\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:50 crc kubenswrapper[4853]: I1209 17:22:50.992367 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/016225c8-a52d-43fd-b0f9-b1df2c77a52b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.026099 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.342585 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca","Type":"ContainerStarted","Data":"c0864eedaadbcd9b1fc2de947eb0b58f577c9049b8258dfa96fb0c1074ec1e60"} Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.356664 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9klzn" event={"ID":"016225c8-a52d-43fd-b0f9-b1df2c77a52b","Type":"ContainerDied","Data":"f37ff45c37488db1928c968e0d3d2472485eeb96a95eda25b4ebce61448981cf"} Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.356718 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37ff45c37488db1928c968e0d3d2472485eeb96a95eda25b4ebce61448981cf" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.356808 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-9klzn" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.364611 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-brt9g" event={"ID":"982d513f-97ab-460e-b032-f639b8ef6ff5","Type":"ContainerStarted","Data":"96f77082a1eb7d0bd455031b746609c3c799ee9c196169eca1fa194ed47297a7"} Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.647935 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.702927 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zwbj9"] Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.969155 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.988587 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4szt4"] Dec 09 17:22:51 crc kubenswrapper[4853]: E1209 17:22:51.989163 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878" containerName="mariadb-account-create-update" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.989174 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878" containerName="mariadb-account-create-update" Dec 09 17:22:51 crc kubenswrapper[4853]: E1209 17:22:51.989208 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016225c8-a52d-43fd-b0f9-b1df2c77a52b" containerName="mariadb-database-create" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.989214 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="016225c8-a52d-43fd-b0f9-b1df2c77a52b" containerName="mariadb-database-create" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.989433 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878" containerName="mariadb-account-create-update" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.989454 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="016225c8-a52d-43fd-b0f9-b1df2c77a52b" containerName="mariadb-database-create" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.990231 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.993618 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 17:22:51 crc kubenswrapper[4853]: I1209 17:22:51.994222 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.006558 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.024653 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4szt4"] Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.047443 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-scripts\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.047516 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.047760 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694wz\" (UniqueName: \"kubernetes.io/projected/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-kube-api-access-694wz\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.047875 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-config-data\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.157109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694wz\" (UniqueName: \"kubernetes.io/projected/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-kube-api-access-694wz\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.157301 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-config-data\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.157361 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-scripts\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: 
\"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.157391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.172096 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-scripts\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.176559 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.184429 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694wz\" (UniqueName: \"kubernetes.io/projected/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-kube-api-access-694wz\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.197432 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-config-data\") pod \"nova-cell1-conductor-db-sync-4szt4\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.316153 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.331843 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.362384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-sg-core-conf-yaml\") pod \"2b93f329-65de-4a21-b711-adbb0ddab604\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.362475 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-log-httpd\") pod \"2b93f329-65de-4a21-b711-adbb0ddab604\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.362558 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-config-data\") pod \"2b93f329-65de-4a21-b711-adbb0ddab604\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.362659 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-scripts\") pod \"2b93f329-65de-4a21-b711-adbb0ddab604\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.362768 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzvq\" (UniqueName: \"kubernetes.io/projected/2b93f329-65de-4a21-b711-adbb0ddab604-kube-api-access-6fzvq\") pod \"2b93f329-65de-4a21-b711-adbb0ddab604\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.363165 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-run-httpd\") pod \"2b93f329-65de-4a21-b711-adbb0ddab604\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.363412 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-combined-ca-bundle\") pod \"2b93f329-65de-4a21-b711-adbb0ddab604\" (UID: \"2b93f329-65de-4a21-b711-adbb0ddab604\") " Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.363588 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b93f329-65de-4a21-b711-adbb0ddab604" (UID: "2b93f329-65de-4a21-b711-adbb0ddab604"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.363837 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b93f329-65de-4a21-b711-adbb0ddab604" (UID: "2b93f329-65de-4a21-b711-adbb0ddab604"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.364147 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.364172 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b93f329-65de-4a21-b711-adbb0ddab604-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.367968 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-scripts" (OuterVolumeSpecName: "scripts") pod "2b93f329-65de-4a21-b711-adbb0ddab604" (UID: "2b93f329-65de-4a21-b711-adbb0ddab604"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.374709 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b93f329-65de-4a21-b711-adbb0ddab604-kube-api-access-6fzvq" (OuterVolumeSpecName: "kube-api-access-6fzvq") pod "2b93f329-65de-4a21-b711-adbb0ddab604" (UID: "2b93f329-65de-4a21-b711-adbb0ddab604"). InnerVolumeSpecName "kube-api-access-6fzvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.418280 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b93f329-65de-4a21-b711-adbb0ddab604" (UID: "2b93f329-65de-4a21-b711-adbb0ddab604"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.426052 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f342c50d-793c-4238-b9e4-de14f3473b4b","Type":"ContainerStarted","Data":"1eaef7803f35becc0e5f80a4ca12068a168c303d9c5c207f0128160edcf69b84"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.442289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-brt9g" event={"ID":"982d513f-97ab-460e-b032-f639b8ef6ff5","Type":"ContainerStarted","Data":"412a7c4654232b8b2c283e9d343a15424b43981693e4cdad5eb6e3ae524fa7c3"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.454961 4853 generic.go:334] "Generic (PLEG): container finished" podID="2b93f329-65de-4a21-b711-adbb0ddab604" containerID="5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d" exitCode=0 Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.455033 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerDied","Data":"5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.455062 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b93f329-65de-4a21-b711-adbb0ddab604","Type":"ContainerDied","Data":"2d41d63e2d3ae2aa7f4c41976d841c5210236534805f5c170e4b87e2d873173a"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.455082 4853 scope.go:117] "RemoveContainer" containerID="1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.455220 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.466241 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.466275 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.466286 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzvq\" (UniqueName: \"kubernetes.io/projected/2b93f329-65de-4a21-b711-adbb0ddab604-kube-api-access-6fzvq\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.503546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8bcf006-6ed4-49d8-92a7-66f5fb141491","Type":"ContainerStarted","Data":"cc001eb89b13eea5b9e5a494d2b03c7a5f1186aeab92660bd8de11e2cd60f83b"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.560878 4853 generic.go:334] "Generic (PLEG): container finished" podID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerID="4e4c061abbd3234034482daab3a616450459b3d239aaafd2996d4bd8b9ea4303" exitCode=0 Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.560965 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" event={"ID":"7086dd56-2ca2-4d9c-ab48-5837b89f117a","Type":"ContainerDied","Data":"4e4c061abbd3234034482daab3a616450459b3d239aaafd2996d4bd8b9ea4303"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.560989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" event={"ID":"7086dd56-2ca2-4d9c-ab48-5837b89f117a","Type":"ContainerStarted","Data":"1941d328c703a3314df85feafaccc7290fc0d25d8c79929641017ab2253c46b0"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.590618 4853 scope.go:117] "RemoveContainer" containerID="c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.590935 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dbb040fb-b035-41b8-82c7-b94858c83360","Type":"ContainerStarted","Data":"64c04e7d047d59b71e3bf3f17cb463c99b536e559f7e72d637644cda3f2a9393"} Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.593640 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-brt9g" podStartSLOduration=3.5936158049999998 podStartE2EDuration="3.593615805s" podCreationTimestamp="2025-12-09 17:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:52.480214212 +0000 UTC m=+1599.414953384" watchObservedRunningTime="2025-12-09 17:22:52.593615805 +0000 UTC m=+1599.528354997" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.625036 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b93f329-65de-4a21-b711-adbb0ddab604" (UID: "2b93f329-65de-4a21-b711-adbb0ddab604"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.675188 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.711314 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-config-data" (OuterVolumeSpecName: "config-data") pod "2b93f329-65de-4a21-b711-adbb0ddab604" (UID: "2b93f329-65de-4a21-b711-adbb0ddab604"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.778662 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b93f329-65de-4a21-b711-adbb0ddab604-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.834418 4853 scope.go:117] "RemoveContainer" containerID="b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.855216 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.888403 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.912828 4853 scope.go:117] "RemoveContainer" containerID="5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.931421 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.931999 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="proxy-httpd" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932019 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="proxy-httpd" Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.932038 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="sg-core" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932044 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="sg-core" Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.932072 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-central-agent" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932078 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-central-agent" Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.932104 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-notification-agent" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932109 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-notification-agent" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932316 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-notification-agent" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932336 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="proxy-httpd" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932347 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="sg-core" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.932365 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" containerName="ceilometer-central-agent" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.934654 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.937102 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.937394 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.956876 4853 scope.go:117] "RemoveContainer" containerID="1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb" Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.957836 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb\": container with ID starting with 1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb not found: ID does not exist" containerID="1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.957887 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb"} err="failed to get container status \"1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb\": rpc error: code = NotFound desc = could not find container \"1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb\": container with ID starting with 1ff59cac600a1cc909e6ada8b242edb236ff4eaf2ad5bcc1cdae9106fce7c1eb not found: ID does not exist" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.957918 4853 scope.go:117] "RemoveContainer" containerID="c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020" Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.958472 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020\": container with ID starting with c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020 not found: ID does not exist" containerID="c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.958500 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020"} err="failed to get container status \"c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020\": rpc error: code = NotFound desc = could not find container \"c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020\": container 
with ID starting with c8eb55df82f823bbdbcff0e6dbd609af645b51f37c494e82f93d442890f85020 not found: ID does not exist" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.958514 4853 scope.go:117] "RemoveContainer" containerID="b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659" Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.959880 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659\": container with ID starting with b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659 not found: ID does not exist" containerID="b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.959904 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659"} err="failed to get container status \"b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659\": rpc error: code = NotFound desc = could not find container \"b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659\": container with ID starting with b49d1bf5ffe81a3137d991ff8978ae224309e0a99981a83d4d6242fd69dde659 not found: ID does not exist" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.959918 4853 scope.go:117] "RemoveContainer" containerID="5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d" Dec 09 17:22:52 crc kubenswrapper[4853]: E1209 17:22:52.960805 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d\": container with ID starting with 5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d not found: ID does not exist" containerID="5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.960843 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d"} err="failed to get container status \"5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d\": rpc error: code = NotFound desc = could not find container \"5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d\": container with ID starting with 5c652af227d3ce40bc5f82534e16e1b6a0b3e1adf105ba1a22e9dcaf1f0d963d not found: ID does not exist" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.962399 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:52 crc kubenswrapper[4853]: W1209 17:22:52.971557 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fbf8680_15b3_40ea_aed2_16f33ed9c8fe.slice/crio-641cc956fd76737ff07cc105cecf0a5602c93ca56c973cc51dad62f1d484150d WatchSource:0}: Error finding container 641cc956fd76737ff07cc105cecf0a5602c93ca56c973cc51dad62f1d484150d: Status 404 returned error can't find the container with id 641cc956fd76737ff07cc105cecf0a5602c93ca56c973cc51dad62f1d484150d Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.976212 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4szt4"] Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.983687 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-scripts\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.983744 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.983878 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9gp\" (UniqueName: \"kubernetes.io/projected/a792bdbf-43dc-431a-98ae-d066e9f177f0-kube-api-access-2l9gp\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.983919 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-run-httpd\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.983966 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-config-data\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.984005 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:52 crc kubenswrapper[4853]: I1209 17:22:52.984034 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-log-httpd\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.089534 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-scripts\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.090385 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.090662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9gp\" (UniqueName: \"kubernetes.io/projected/a792bdbf-43dc-431a-98ae-d066e9f177f0-kube-api-access-2l9gp\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") 
" pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.090721 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-run-httpd\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.090793 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-config-data\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.090882 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.090915 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-log-httpd\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.091582 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-run-httpd\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.092237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-log-httpd\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.097292 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-scripts\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.101157 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.103918 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.105168 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-config-data\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.135392 
4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9gp\" (UniqueName: \"kubernetes.io/projected/a792bdbf-43dc-431a-98ae-d066e9f177f0-kube-api-access-2l9gp\") pod \"ceilometer-0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") " pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.274738 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.591810 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b93f329-65de-4a21-b711-adbb0ddab604" path="/var/lib/kubelet/pods/2b93f329-65de-4a21-b711-adbb0ddab604/volumes" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.701877 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" event={"ID":"7086dd56-2ca2-4d9c-ab48-5837b89f117a","Type":"ContainerStarted","Data":"d835ad1c0ab14fd20632c7945c31ed32a0697be7aa734d39e5519452ed869585"} Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.703111 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.748498 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4szt4" event={"ID":"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe","Type":"ContainerStarted","Data":"624e36bb2913de090322018d8e37f756c7a1bac84a7a75b3e8baecebc88a9fae"} Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.748547 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4szt4" event={"ID":"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe","Type":"ContainerStarted","Data":"641cc956fd76737ff07cc105cecf0a5602c93ca56c973cc51dad62f1d484150d"} Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.942298 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.950657 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" podStartSLOduration=4.950637461 podStartE2EDuration="4.950637461s" podCreationTimestamp="2025-12-09 17:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:53.817799483 +0000 UTC m=+1600.752538675" watchObservedRunningTime="2025-12-09 17:22:53.950637461 +0000 UTC m=+1600.885376643" Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.968653 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:22:53 crc kubenswrapper[4853]: I1209 17:22:53.969990 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4szt4" podStartSLOduration=2.969979691 podStartE2EDuration="2.969979691s" podCreationTimestamp="2025-12-09 17:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:22:53.843342648 +0000 UTC m=+1600.778081830" watchObservedRunningTime="2025-12-09 17:22:53.969979691 +0000 UTC m=+1600.904718873" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.654694 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-4x7tx"] Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.656666 4853 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.660651 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.660828 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.661059 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.661247 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kq94v" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.669808 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4x7tx"] Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.791524 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8flz\" (UniqueName: \"kubernetes.io/projected/23f1f371-2f71-401a-8c20-d400f873f3d1-kube-api-access-p8flz\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.791940 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-config-data\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.792347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-scripts\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.792477 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-combined-ca-bundle\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.900303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-combined-ca-bundle\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.902294 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8flz\" (UniqueName: \"kubernetes.io/projected/23f1f371-2f71-401a-8c20-d400f873f3d1-kube-api-access-p8flz\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.903024 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-config-data\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 
09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.903145 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-scripts\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.928492 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-combined-ca-bundle\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.928627 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-config-data\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.932393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-scripts\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.936119 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8flz\" (UniqueName: \"kubernetes.io/projected/23f1f371-2f71-401a-8c20-d400f873f3d1-kube-api-access-p8flz\") pod \"aodh-db-sync-4x7tx\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") " pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:56 crc kubenswrapper[4853]: I1209 17:22:56.981493 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-4x7tx" Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.307108 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.661677 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4x7tx"] Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.819937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dbb040fb-b035-41b8-82c7-b94858c83360","Type":"ContainerStarted","Data":"23fd1afd0437cfc7a7f24b768805570a765bf54cc4ebed7e501aa9ee970c576a"} Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.820196 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dbb040fb-b035-41b8-82c7-b94858c83360" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://23fd1afd0437cfc7a7f24b768805570a765bf54cc4ebed7e501aa9ee970c576a" gracePeriod=30 Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.822129 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4x7tx" event={"ID":"23f1f371-2f71-401a-8c20-d400f873f3d1","Type":"ContainerStarted","Data":"a188f5b3d52a83b0acd4d42a5d360d3b13553d413f3d99b22eba2b59d3586409"} Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.828143 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f342c50d-793c-4238-b9e4-de14f3473b4b","Type":"ContainerStarted","Data":"ca10fd94240a74555b9fd6cd6d00012213f533f7ee0947ec37d6b5504fc45894"} Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.828193 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f342c50d-793c-4238-b9e4-de14f3473b4b","Type":"ContainerStarted","Data":"c9d4e438b848cb51444667a59dcfb1d32e64ce9ef2f2a7bca3962c328f792b0b"} Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.828279 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-metadata" containerID="cri-o://ca10fd94240a74555b9fd6cd6d00012213f533f7ee0947ec37d6b5504fc45894" gracePeriod=30 Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.828273 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-log" containerID="cri-o://c9d4e438b848cb51444667a59dcfb1d32e64ce9ef2f2a7bca3962c328f792b0b" gracePeriod=30 Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.829478 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerStarted","Data":"5a1ad19ca12ed50e8630d62ee0da5ad7592b2ed127c11511df7dd6e40c5dcf8d"} Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.832719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8bcf006-6ed4-49d8-92a7-66f5fb141491","Type":"ContainerStarted","Data":"69b0624fe307eca847b467dd0b71b8e6727b273a237e5035f8fc248f67c08c98"} Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.837180 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca","Type":"ContainerStarted","Data":"d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b"} Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.851263 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.029377299 podStartE2EDuration="8.851246997s" podCreationTimestamp="2025-12-09 17:22:49 +0000 UTC" firstStartedPulling="2025-12-09 17:22:51.953398119 +0000 UTC m=+1598.888137301" lastFinishedPulling="2025-12-09 17:22:56.775267817 +0000 UTC m=+1603.710006999" observedRunningTime="2025-12-09 17:22:57.850058054 +0000 UTC m=+1604.784797236" watchObservedRunningTime="2025-12-09 17:22:57.851246997 +0000 UTC m=+1604.785986179" Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.907137 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.097783014 podStartE2EDuration="8.90710033s" podCreationTimestamp="2025-12-09 17:22:49 +0000 UTC" firstStartedPulling="2025-12-09 17:22:51.966449244 +0000 UTC m=+1598.901188426" lastFinishedPulling="2025-12-09 17:22:56.77576656 +0000 UTC m=+1603.710505742" observedRunningTime="2025-12-09 17:22:57.879192749 +0000 UTC m=+1604.813931931" watchObservedRunningTime="2025-12-09 17:22:57.90710033 +0000 UTC m=+1604.841839512" Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.950324 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.200768701 podStartE2EDuration="8.950303759s" podCreationTimestamp="2025-12-09 17:22:49 +0000 UTC" firstStartedPulling="2025-12-09 17:22:51.030708618 +0000 UTC m=+1597.965447790" lastFinishedPulling="2025-12-09 17:22:56.780243666 +0000 UTC m=+1603.714982848" observedRunningTime="2025-12-09 17:22:57.896527284 +0000 UTC m=+1604.831266456" watchObservedRunningTime="2025-12-09 17:22:57.950303759 +0000 UTC m=+1604.885042941" Dec 09 17:22:57 crc kubenswrapper[4853]: I1209 17:22:57.976486 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.900475953 podStartE2EDuration="8.976464392s" podCreationTimestamp="2025-12-09 17:22:49 +0000 UTC" firstStartedPulling="2025-12-09 17:22:51.69972384 +0000 UTC m=+1598.634463022" lastFinishedPulling="2025-12-09 17:22:56.775712269 +0000 UTC m=+1603.710451461" observedRunningTime="2025-12-09 17:22:57.92711481 +0000 UTC m=+1604.861854062" watchObservedRunningTime="2025-12-09 17:22:57.976464392 +0000 UTC m=+1604.911203574" Dec 09 17:22:58 crc kubenswrapper[4853]: I1209 17:22:58.592809 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:22:58 crc kubenswrapper[4853]: I1209 17:22:58.592870 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:22:58 crc kubenswrapper[4853]: I1209 17:22:58.870355 4853 generic.go:334] "Generic (PLEG): container finished" podID="f342c50d-793c-4238-b9e4-de14f3473b4b" 
containerID="c9d4e438b848cb51444667a59dcfb1d32e64ce9ef2f2a7bca3962c328f792b0b" exitCode=143 Dec 09 17:22:58 crc kubenswrapper[4853]: I1209 17:22:58.870374 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f342c50d-793c-4238-b9e4-de14f3473b4b","Type":"ContainerDied","Data":"c9d4e438b848cb51444667a59dcfb1d32e64ce9ef2f2a7bca3962c328f792b0b"} Dec 09 17:22:58 crc kubenswrapper[4853]: I1209 17:22:58.874357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerStarted","Data":"9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00"} Dec 09 17:22:58 crc kubenswrapper[4853]: I1209 17:22:58.877770 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8bcf006-6ed4-49d8-92a7-66f5fb141491","Type":"ContainerStarted","Data":"2efdf14a10d34093ed8edf3bb1b6430d42e06181885a3a7c017789c6f5d59df7"} Dec 09 17:22:59 crc kubenswrapper[4853]: I1209 17:22:59.891139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerStarted","Data":"0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3"} Dec 09 17:22:59 crc kubenswrapper[4853]: I1209 17:22:59.935688 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 17:22:59 crc kubenswrapper[4853]: I1209 17:22:59.935777 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 17:22:59 crc kubenswrapper[4853]: I1209 17:22:59.975474 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.440065 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.440110 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.467267 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.467334 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.476538 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.504859 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.571263 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-dhffl"] Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.571467 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" containerName="dnsmasq-dns" containerID="cri-o://9e830dd7cbcf906cc5234eb68a6225a40d14a5c80b3f6c6211654aaa99d69f15" gracePeriod=10 Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.910748 4853 generic.go:334] "Generic (PLEG): container finished" podID="982d513f-97ab-460e-b032-f639b8ef6ff5" 
containerID="412a7c4654232b8b2c283e9d343a15424b43981693e4cdad5eb6e3ae524fa7c3" exitCode=0 Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.910815 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-brt9g" event={"ID":"982d513f-97ab-460e-b032-f639b8ef6ff5","Type":"ContainerDied","Data":"412a7c4654232b8b2c283e9d343a15424b43981693e4cdad5eb6e3ae524fa7c3"} Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.914712 4853 generic.go:334] "Generic (PLEG): container finished" podID="f140945e-1f28-41d6-b3c5-f09100c204df" containerID="9e830dd7cbcf906cc5234eb68a6225a40d14a5c80b3f6c6211654aaa99d69f15" exitCode=0 Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.914800 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" event={"ID":"f140945e-1f28-41d6-b3c5-f09100c204df","Type":"ContainerDied","Data":"9e830dd7cbcf906cc5234eb68a6225a40d14a5c80b3f6c6211654aaa99d69f15"} Dec 09 17:23:00 crc kubenswrapper[4853]: I1209 17:23:00.969106 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 17:23:01 crc kubenswrapper[4853]: I1209 17:23:01.523751 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:01 crc kubenswrapper[4853]: I1209 17:23:01.523906 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:01 crc kubenswrapper[4853]: I1209 17:23:01.949623 4853 generic.go:334] "Generic (PLEG): container finished" podID="6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" containerID="624e36bb2913de090322018d8e37f756c7a1bac84a7a75b3e8baecebc88a9fae" exitCode=0 Dec 09 17:23:01 crc kubenswrapper[4853]: I1209 17:23:01.949862 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4szt4" event={"ID":"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe","Type":"ContainerDied","Data":"624e36bb2913de090322018d8e37f756c7a1bac84a7a75b3e8baecebc88a9fae"} Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.641995 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.753332 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-config-data\") pod \"982d513f-97ab-460e-b032-f639b8ef6ff5\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.753410 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-scripts\") pod \"982d513f-97ab-460e-b032-f639b8ef6ff5\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.753577 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-combined-ca-bundle\") pod \"982d513f-97ab-460e-b032-f639b8ef6ff5\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.753776 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfmw8\" (UniqueName: \"kubernetes.io/projected/982d513f-97ab-460e-b032-f639b8ef6ff5-kube-api-access-pfmw8\") pod \"982d513f-97ab-460e-b032-f639b8ef6ff5\" (UID: \"982d513f-97ab-460e-b032-f639b8ef6ff5\") " Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.759259 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-scripts" (OuterVolumeSpecName: "scripts") pod "982d513f-97ab-460e-b032-f639b8ef6ff5" (UID: "982d513f-97ab-460e-b032-f639b8ef6ff5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.761287 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982d513f-97ab-460e-b032-f639b8ef6ff5-kube-api-access-pfmw8" (OuterVolumeSpecName: "kube-api-access-pfmw8") pod "982d513f-97ab-460e-b032-f639b8ef6ff5" (UID: "982d513f-97ab-460e-b032-f639b8ef6ff5"). InnerVolumeSpecName "kube-api-access-pfmw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.794887 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-config-data" (OuterVolumeSpecName: "config-data") pod "982d513f-97ab-460e-b032-f639b8ef6ff5" (UID: "982d513f-97ab-460e-b032-f639b8ef6ff5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.796561 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "982d513f-97ab-460e-b032-f639b8ef6ff5" (UID: "982d513f-97ab-460e-b032-f639b8ef6ff5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.858689 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.858720 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.858729 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982d513f-97ab-460e-b032-f639b8ef6ff5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:03 crc kubenswrapper[4853]: I1209 17:23:03.858740 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfmw8\" (UniqueName: \"kubernetes.io/projected/982d513f-97ab-460e-b032-f639b8ef6ff5-kube-api-access-pfmw8\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.005490 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-brt9g" event={"ID":"982d513f-97ab-460e-b032-f639b8ef6ff5","Type":"ContainerDied","Data":"96f77082a1eb7d0bd455031b746609c3c799ee9c196169eca1fa194ed47297a7"} Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.005531 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f77082a1eb7d0bd455031b746609c3c799ee9c196169eca1fa194ed47297a7" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.005542 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-brt9g" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.015339 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.215:5353: connect: connection refused" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.257440 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4szt4" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.371239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-scripts\") pod \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.371941 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-694wz\" (UniqueName: \"kubernetes.io/projected/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-kube-api-access-694wz\") pod \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.372065 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-combined-ca-bundle\") pod \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.372140 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-config-data\") pod \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\" (UID: \"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.376586 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-scripts" (OuterVolumeSpecName: "scripts") pod "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" (UID: "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.379761 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-kube-api-access-694wz" (OuterVolumeSpecName: "kube-api-access-694wz") pod "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" (UID: "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe"). InnerVolumeSpecName "kube-api-access-694wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.454847 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-config-data" (OuterVolumeSpecName: "config-data") pod "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" (UID: "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.469325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" (UID: "6fbf8680-15b3-40ea-aed2-16f33ed9c8fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.476136 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.476183 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.476195 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-694wz\" (UniqueName: \"kubernetes.io/projected/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-kube-api-access-694wz\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.476208 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.664051 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.786319 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-config\") pod \"f140945e-1f28-41d6-b3c5-f09100c204df\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.786470 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-swift-storage-0\") pod \"f140945e-1f28-41d6-b3c5-f09100c204df\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.786567 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-sb\") pod \"f140945e-1f28-41d6-b3c5-f09100c204df\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.786763 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-nb\") pod \"f140945e-1f28-41d6-b3c5-f09100c204df\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.786813 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhcg\" (UniqueName: \"kubernetes.io/projected/f140945e-1f28-41d6-b3c5-f09100c204df-kube-api-access-lmhcg\") pod \"f140945e-1f28-41d6-b3c5-f09100c204df\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.786870 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-svc\") pod \"f140945e-1f28-41d6-b3c5-f09100c204df\" (UID: \"f140945e-1f28-41d6-b3c5-f09100c204df\") " Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.792291 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/f140945e-1f28-41d6-b3c5-f09100c204df-kube-api-access-lmhcg" (OuterVolumeSpecName: "kube-api-access-lmhcg") pod "f140945e-1f28-41d6-b3c5-f09100c204df" (UID: "f140945e-1f28-41d6-b3c5-f09100c204df"). InnerVolumeSpecName "kube-api-access-lmhcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.855880 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.856135 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-log" containerID="cri-o://69b0624fe307eca847b467dd0b71b8e6727b273a237e5035f8fc248f67c08c98" gracePeriod=30 Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.856642 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-api" containerID="cri-o://2efdf14a10d34093ed8edf3bb1b6430d42e06181885a3a7c017789c6f5d59df7" gracePeriod=30 Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.857459 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f140945e-1f28-41d6-b3c5-f09100c204df" (UID: "f140945e-1f28-41d6-b3c5-f09100c204df"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.869538 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-config" (OuterVolumeSpecName: "config") pod "f140945e-1f28-41d6-b3c5-f09100c204df" (UID: "f140945e-1f28-41d6-b3c5-f09100c204df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.894834 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f140945e-1f28-41d6-b3c5-f09100c204df" (UID: "f140945e-1f28-41d6-b3c5-f09100c204df"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.895398 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.895439 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmhcg\" (UniqueName: \"kubernetes.io/projected/f140945e-1f28-41d6-b3c5-f09100c204df-kube-api-access-lmhcg\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.895451 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.895460 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.896028 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f140945e-1f28-41d6-b3c5-f09100c204df" (UID: "f140945e-1f28-41d6-b3c5-f09100c204df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.908590 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.908884 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" containerName="nova-scheduler-scheduler" containerID="cri-o://d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b" gracePeriod=30 Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.918672 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f140945e-1f28-41d6-b3c5-f09100c204df" (UID: "f140945e-1f28-41d6-b3c5-f09100c204df"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:04 crc kubenswrapper[4853]: E1209 17:23:04.944082 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 17:23:04 crc kubenswrapper[4853]: E1209 17:23:04.952585 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 17:23:04 crc kubenswrapper[4853]: E1209 17:23:04.954450 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 17:23:04 crc kubenswrapper[4853]: E1209 17:23:04.954514 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" containerName="nova-scheduler-scheduler" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.998339 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:04 crc kubenswrapper[4853]: I1209 17:23:04.998379 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f140945e-1f28-41d6-b3c5-f09100c204df-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.019825 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl" event={"ID":"f140945e-1f28-41d6-b3c5-f09100c204df","Type":"ContainerDied","Data":"1192e6c93923c70b0c1557902831ec2268ce63b001c583d4f106280c4344791a"} Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.019873 4853 scope.go:117] "RemoveContainer" containerID="9e830dd7cbcf906cc5234eb68a6225a40d14a5c80b3f6c6211654aaa99d69f15" Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.020012 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-dhffl"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.027003 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4x7tx" event={"ID":"23f1f371-2f71-401a-8c20-d400f873f3d1","Type":"ContainerStarted","Data":"7725319fd781eba583befbc4b11e0edf73401339a32c0a8e8a07d532719c2593"}
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.029446 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4szt4" event={"ID":"6fbf8680-15b3-40ea-aed2-16f33ed9c8fe","Type":"ContainerDied","Data":"641cc956fd76737ff07cc105cecf0a5602c93ca56c973cc51dad62f1d484150d"}
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.029472 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="641cc956fd76737ff07cc105cecf0a5602c93ca56c973cc51dad62f1d484150d"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.029510 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4szt4"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.039462 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerStarted","Data":"bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb"}
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.042986 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerID="69b0624fe307eca847b467dd0b71b8e6727b273a237e5035f8fc248f67c08c98" exitCode=143
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.043046 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8bcf006-6ed4-49d8-92a7-66f5fb141491","Type":"ContainerDied","Data":"69b0624fe307eca847b467dd0b71b8e6727b273a237e5035f8fc248f67c08c98"}
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.044875 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-4x7tx" podStartSLOduration=2.453210873 podStartE2EDuration="9.044855286s" podCreationTimestamp="2025-12-09 17:22:56 +0000 UTC" firstStartedPulling="2025-12-09 17:22:57.703322788 +0000 UTC m=+1604.638061970" lastFinishedPulling="2025-12-09 17:23:04.294967201 +0000 UTC m=+1611.229706383" observedRunningTime="2025-12-09 17:23:05.043257381 +0000 UTC m=+1611.977996563" watchObservedRunningTime="2025-12-09 17:23:05.044855286 +0000 UTC m=+1611.979594468"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.059030 4853 scope.go:117] "RemoveContainer" containerID="44796b9f1f7a163f31dff7c99dc7bd54dac12fc17e093b81f5bda1022df0aed0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.079985 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-dhffl"]
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.091118 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-dhffl"]
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.349750 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 09 17:23:05 crc kubenswrapper[4853]: E1209 17:23:05.351412 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" containerName="dnsmasq-dns"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.351457 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" containerName="dnsmasq-dns"
Dec 09 17:23:05 crc kubenswrapper[4853]: E1209 17:23:05.351517 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982d513f-97ab-460e-b032-f639b8ef6ff5" containerName="nova-manage"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.351526 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="982d513f-97ab-460e-b032-f639b8ef6ff5" containerName="nova-manage"
Dec 09 17:23:05 crc kubenswrapper[4853]: E1209 17:23:05.351572 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" containerName="nova-cell1-conductor-db-sync"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.351580 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" containerName="nova-cell1-conductor-db-sync"
Dec 09 17:23:05 crc kubenswrapper[4853]: E1209 17:23:05.351613 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" containerName="init"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.351621 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" containerName="init"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.351921 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" containerName="dnsmasq-dns"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.351942 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="982d513f-97ab-460e-b032-f639b8ef6ff5" containerName="nova-manage"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.351979 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" containerName="nova-cell1-conductor-db-sync"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.353670 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.356781 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.365227 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.518655 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn7pg\" (UniqueName: \"kubernetes.io/projected/ba91b779-9f16-44fa-97db-a6e51125893a-kube-api-access-dn7pg\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.518837 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91b779-9f16-44fa-97db-a6e51125893a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.519058 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91b779-9f16-44fa-97db-a6e51125893a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.587149 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f140945e-1f28-41d6-b3c5-f09100c204df" path="/var/lib/kubelet/pods/f140945e-1f28-41d6-b3c5-f09100c204df/volumes"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.622751 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn7pg\" (UniqueName: \"kubernetes.io/projected/ba91b779-9f16-44fa-97db-a6e51125893a-kube-api-access-dn7pg\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.622901 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91b779-9f16-44fa-97db-a6e51125893a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.622973 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91b779-9f16-44fa-97db-a6e51125893a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.634488 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba91b779-9f16-44fa-97db-a6e51125893a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.637964 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba91b779-9f16-44fa-97db-a6e51125893a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.654225 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn7pg\" (UniqueName: \"kubernetes.io/projected/ba91b779-9f16-44fa-97db-a6e51125893a-kube-api-access-dn7pg\") pod \"nova-cell1-conductor-0\" (UID: \"ba91b779-9f16-44fa-97db-a6e51125893a\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:05 crc kubenswrapper[4853]: I1209 17:23:05.687045 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:06 crc kubenswrapper[4853]: I1209 17:23:06.076982 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerStarted","Data":"04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d"}
Dec 09 17:23:06 crc kubenswrapper[4853]: I1209 17:23:06.077984 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 09 17:23:06 crc kubenswrapper[4853]: I1209 17:23:06.108666 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.661003603 podStartE2EDuration="14.108644636s" podCreationTimestamp="2025-12-09 17:22:52 +0000 UTC" firstStartedPulling="2025-12-09 17:22:57.328510838 +0000 UTC m=+1604.263250020" lastFinishedPulling="2025-12-09 17:23:05.776151871 +0000 UTC m=+1612.710891053" observedRunningTime="2025-12-09 17:23:06.099355485 +0000 UTC m=+1613.034094687" watchObservedRunningTime="2025-12-09 17:23:06.108644636 +0000 UTC m=+1613.043383818"
Dec 09 17:23:06 crc kubenswrapper[4853]: W1209 17:23:06.257887 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba91b779_9f16_44fa_97db_a6e51125893a.slice/crio-85e9c732aded305af474c3e857b22b6f9dc073b4bd83402fe8f9d7c691f45dc5 WatchSource:0}: Error finding container 85e9c732aded305af474c3e857b22b6f9dc073b4bd83402fe8f9d7c691f45dc5: Status 404 returned error can't find the container with id 85e9c732aded305af474c3e857b22b6f9dc073b4bd83402fe8f9d7c691f45dc5
Dec 09 17:23:06 crc kubenswrapper[4853]: I1209 17:23:06.266157 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 09 17:23:07 crc kubenswrapper[4853]: I1209 17:23:07.094626 4853 generic.go:334] "Generic (PLEG): container finished" podID="23f1f371-2f71-401a-8c20-d400f873f3d1" containerID="7725319fd781eba583befbc4b11e0edf73401339a32c0a8e8a07d532719c2593" exitCode=0
Dec 09 17:23:07 crc kubenswrapper[4853]: I1209 17:23:07.094757 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4x7tx" event={"ID":"23f1f371-2f71-401a-8c20-d400f873f3d1","Type":"ContainerDied","Data":"7725319fd781eba583befbc4b11e0edf73401339a32c0a8e8a07d532719c2593"}
Dec 09 17:23:07 crc kubenswrapper[4853]: I1209 17:23:07.100425 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ba91b779-9f16-44fa-97db-a6e51125893a","Type":"ContainerStarted","Data":"4c86c774e09f54f872bd1969f9bb7c16f27e1743c156a5b785f1c67c70e32569"}
Dec 09 17:23:07 crc kubenswrapper[4853]: I1209 17:23:07.100496 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ba91b779-9f16-44fa-97db-a6e51125893a","Type":"ContainerStarted","Data":"85e9c732aded305af474c3e857b22b6f9dc073b4bd83402fe8f9d7c691f45dc5"}
Dec 09 17:23:07 crc kubenswrapper[4853]: I1209 17:23:07.100582 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:07 crc kubenswrapper[4853]: I1209 17:23:07.153826 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.153801574 podStartE2EDuration="2.153801574s" podCreationTimestamp="2025-12-09 17:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:07.142622961 +0000 UTC m=+1614.077362143" watchObservedRunningTime="2025-12-09 17:23:07.153801574 +0000 UTC m=+1614.088540756"
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.114185 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerID="2efdf14a10d34093ed8edf3bb1b6430d42e06181885a3a7c017789c6f5d59df7" exitCode=0
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.114264 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8bcf006-6ed4-49d8-92a7-66f5fb141491","Type":"ContainerDied","Data":"2efdf14a10d34093ed8edf3bb1b6430d42e06181885a3a7c017789c6f5d59df7"}
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.689250 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.695321 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4x7tx"
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.716324 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bcf006-6ed4-49d8-92a7-66f5fb141491-logs\") pod \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.716399 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-scripts\") pod \"23f1f371-2f71-401a-8c20-d400f873f3d1\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.716793 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bcf006-6ed4-49d8-92a7-66f5fb141491-logs" (OuterVolumeSpecName: "logs") pod "b8bcf006-6ed4-49d8-92a7-66f5fb141491" (UID: "b8bcf006-6ed4-49d8-92a7-66f5fb141491"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.716816 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-combined-ca-bundle\") pod \"23f1f371-2f71-401a-8c20-d400f873f3d1\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.716890 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9cf9\" (UniqueName: \"kubernetes.io/projected/b8bcf006-6ed4-49d8-92a7-66f5fb141491-kube-api-access-t9cf9\") pod \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.717939 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8bcf006-6ed4-49d8-92a7-66f5fb141491-logs\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.724422 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bcf006-6ed4-49d8-92a7-66f5fb141491-kube-api-access-t9cf9" (OuterVolumeSpecName: "kube-api-access-t9cf9") pod "b8bcf006-6ed4-49d8-92a7-66f5fb141491" (UID: "b8bcf006-6ed4-49d8-92a7-66f5fb141491"). InnerVolumeSpecName "kube-api-access-t9cf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.724640 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-scripts" (OuterVolumeSpecName: "scripts") pod "23f1f371-2f71-401a-8c20-d400f873f3d1" (UID: "23f1f371-2f71-401a-8c20-d400f873f3d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.750926 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23f1f371-2f71-401a-8c20-d400f873f3d1" (UID: "23f1f371-2f71-401a-8c20-d400f873f3d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.819068 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8flz\" (UniqueName: \"kubernetes.io/projected/23f1f371-2f71-401a-8c20-d400f873f3d1-kube-api-access-p8flz\") pod \"23f1f371-2f71-401a-8c20-d400f873f3d1\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.819562 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-combined-ca-bundle\") pod \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.819824 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-config-data\") pod \"23f1f371-2f71-401a-8c20-d400f873f3d1\" (UID: \"23f1f371-2f71-401a-8c20-d400f873f3d1\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.820253 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-config-data\") pod \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\" (UID: \"b8bcf006-6ed4-49d8-92a7-66f5fb141491\") "
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.821229 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.821383 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9cf9\" (UniqueName: \"kubernetes.io/projected/b8bcf006-6ed4-49d8-92a7-66f5fb141491-kube-api-access-t9cf9\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.821466 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.823332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f1f371-2f71-401a-8c20-d400f873f3d1-kube-api-access-p8flz" (OuterVolumeSpecName: "kube-api-access-p8flz") pod "23f1f371-2f71-401a-8c20-d400f873f3d1" (UID: "23f1f371-2f71-401a-8c20-d400f873f3d1"). InnerVolumeSpecName "kube-api-access-p8flz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.858817 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-config-data" (OuterVolumeSpecName: "config-data") pod "23f1f371-2f71-401a-8c20-d400f873f3d1" (UID: "23f1f371-2f71-401a-8c20-d400f873f3d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.868322 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-config-data" (OuterVolumeSpecName: "config-data") pod "b8bcf006-6ed4-49d8-92a7-66f5fb141491" (UID: "b8bcf006-6ed4-49d8-92a7-66f5fb141491"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.870731 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8bcf006-6ed4-49d8-92a7-66f5fb141491" (UID: "b8bcf006-6ed4-49d8-92a7-66f5fb141491"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.922786 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23f1f371-2f71-401a-8c20-d400f873f3d1-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.922818 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.922828 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8flz\" (UniqueName: \"kubernetes.io/projected/23f1f371-2f71-401a-8c20-d400f873f3d1-kube-api-access-p8flz\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:08 crc kubenswrapper[4853]: I1209 17:23:08.922837 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bcf006-6ed4-49d8-92a7-66f5fb141491-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.138411 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.139203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b8bcf006-6ed4-49d8-92a7-66f5fb141491","Type":"ContainerDied","Data":"cc001eb89b13eea5b9e5a494d2b03c7a5f1186aeab92660bd8de11e2cd60f83b"}
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.139267 4853 scope.go:117] "RemoveContainer" containerID="2efdf14a10d34093ed8edf3bb1b6430d42e06181885a3a7c017789c6f5d59df7"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.146657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4x7tx" event={"ID":"23f1f371-2f71-401a-8c20-d400f873f3d1","Type":"ContainerDied","Data":"a188f5b3d52a83b0acd4d42a5d360d3b13553d413f3d99b22eba2b59d3586409"}
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.146705 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a188f5b3d52a83b0acd4d42a5d360d3b13553d413f3d99b22eba2b59d3586409"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.146778 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4x7tx"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.222691 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.232829 4853 scope.go:117] "RemoveContainer" containerID="69b0624fe307eca847b467dd0b71b8e6727b273a237e5035f8fc248f67c08c98"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.234040 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.308732 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:09 crc kubenswrapper[4853]: E1209 17:23:09.309268 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-log"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.309281 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-log"
Dec 09 17:23:09 crc kubenswrapper[4853]: E1209 17:23:09.309302 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f1f371-2f71-401a-8c20-d400f873f3d1" containerName="aodh-db-sync"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.309308 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f1f371-2f71-401a-8c20-d400f873f3d1" containerName="aodh-db-sync"
Dec 09 17:23:09 crc kubenswrapper[4853]: E1209 17:23:09.309331 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-api"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.309339 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-api"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.309568 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-log"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.309841 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" containerName="nova-api-api"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.309852 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f1f371-2f71-401a-8c20-d400f873f3d1" containerName="aodh-db-sync"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.311236 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.314484 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.326680 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.331557 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-config-data\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.331642 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w52bj\" (UniqueName: \"kubernetes.io/projected/63a0a5e5-59d2-44a9-a9fc-fb8578265225-kube-api-access-w52bj\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.331766 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.331799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0a5e5-59d2-44a9-a9fc-fb8578265225-logs\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.435793 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0a5e5-59d2-44a9-a9fc-fb8578265225-logs\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.435932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-config-data\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.436428 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0a5e5-59d2-44a9-a9fc-fb8578265225-logs\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.441643 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w52bj\" (UniqueName: \"kubernetes.io/projected/63a0a5e5-59d2-44a9-a9fc-fb8578265225-kube-api-access-w52bj\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.441969 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.454692 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.481947 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-config-data\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.490322 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w52bj\" (UniqueName: \"kubernetes.io/projected/63a0a5e5-59d2-44a9-a9fc-fb8578265225-kube-api-access-w52bj\") pod \"nova-api-0\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") " pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.590823 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bcf006-6ed4-49d8-92a7-66f5fb141491" path="/var/lib/kubelet/pods/b8bcf006-6ed4-49d8-92a7-66f5fb141491/volumes"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.607326 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.638472 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.657859 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-config-data\") pod \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") "
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.658080 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-combined-ca-bundle\") pod \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") "
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.659969 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhbjm\" (UniqueName: \"kubernetes.io/projected/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-kube-api-access-mhbjm\") pod \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\" (UID: \"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca\") "
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.665814 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-kube-api-access-mhbjm" (OuterVolumeSpecName: "kube-api-access-mhbjm") pod "c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" (UID: "c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca"). InnerVolumeSpecName "kube-api-access-mhbjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.726704 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" (UID: "c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.734889 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-config-data" (OuterVolumeSpecName: "config-data") pod "c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" (UID: "c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.772465 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.772507 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:09 crc kubenswrapper[4853]: I1209 17:23:09.772523 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhbjm\" (UniqueName: \"kubernetes.io/projected/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca-kube-api-access-mhbjm\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.161096 4853 generic.go:334] "Generic (PLEG): container finished" podID="c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" containerID="d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b" exitCode=0
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.161445 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca","Type":"ContainerDied","Data":"d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b"}
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.161475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca","Type":"ContainerDied","Data":"c0864eedaadbcd9b1fc2de947eb0b58f577c9049b8258dfa96fb0c1074ec1e60"}
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.161493 4853 scope.go:117] "RemoveContainer" containerID="d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.161651 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.192704 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.193320 4853 scope.go:117] "RemoveContainer" containerID="d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b"
Dec 09 17:23:10 crc kubenswrapper[4853]: E1209 17:23:10.193863 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b\": container with ID starting with d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b not found: ID does not exist" containerID="d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.193911 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b"} err="failed to get container status \"d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b\": rpc error: code = NotFound desc = could not find container \"d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b\": container with ID starting with d9003abb0ef9c85e6299ce3395eac1ba5d7f4772ad1548b97f43f1372a669c4b not found: ID does not exist"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.211998 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.236676 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.248389 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 17:23:10 crc kubenswrapper[4853]: E1209 17:23:10.249405 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" containerName="nova-scheduler-scheduler"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.249498 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" containerName="nova-scheduler-scheduler"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.249908 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" containerName="nova-scheduler-scheduler"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.251053 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.259523 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.262447 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.282695 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-config-data\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.284807 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8lb\" (UniqueName: \"kubernetes.io/projected/74506648-399a-4884-b461-98b2d287223b-kube-api-access-kf8lb\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.285060 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.387663 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf8lb\" (UniqueName: \"kubernetes.io/projected/74506648-399a-4884-b461-98b2d287223b-kube-api-access-kf8lb\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.387718 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.387826 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-config-data\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.393319 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-config-data\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.393336 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.414858 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf8lb\" (UniqueName: \"kubernetes.io/projected/74506648-399a-4884-b461-98b2d287223b-kube-api-access-kf8lb\") pod \"nova-scheduler-0\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " pod="openstack/nova-scheduler-0"
Dec 09 17:23:10 crc kubenswrapper[4853]: I1209 17:23:10.583340 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.111516 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 17:23:11 crc kubenswrapper[4853]: W1209 17:23:11.114062 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74506648_399a_4884_b461_98b2d287223b.slice/crio-014934c4e203430dffe29f9eed545b960074a810823c6879606460a3bba6364f WatchSource:0}: Error finding container 014934c4e203430dffe29f9eed545b960074a810823c6879606460a3bba6364f: Status 404 returned error can't find the container with id 014934c4e203430dffe29f9eed545b960074a810823c6879606460a3bba6364f
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.180381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74506648-399a-4884-b461-98b2d287223b","Type":"ContainerStarted","Data":"014934c4e203430dffe29f9eed545b960074a810823c6879606460a3bba6364f"}
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.185413 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a0a5e5-59d2-44a9-a9fc-fb8578265225","Type":"ContainerStarted","Data":"d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372"}
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.185463 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a0a5e5-59d2-44a9-a9fc-fb8578265225","Type":"ContainerStarted","Data":"e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40"}
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.185475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a0a5e5-59d2-44a9-a9fc-fb8578265225","Type":"ContainerStarted","Data":"ba1f53962aac780fd8b09dc25e5d19011f2d91efeb904b9fc2a441b21735182c"}
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.227576 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.227557715 podStartE2EDuration="2.227557715s" podCreationTimestamp="2025-12-09 17:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:11.213657646 +0000 UTC m=+1618.148396848" watchObservedRunningTime="2025-12-09 17:23:11.227557715 +0000 UTC m=+1618.162296897"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.467047 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.472006 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.476051 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.476244 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kq94v"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.476415 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.493669 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.515615 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-scripts\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.515762 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjj8g\" (UniqueName: \"kubernetes.io/projected/2cd9b803-6702-4701-816f-aa54f55b0ddc-kube-api-access-mjj8g\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.515970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-config-data\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.516055 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.586692 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca" path="/var/lib/kubelet/pods/c453631b-0c1a-4bb7-bbb8-14aa5a5d09ca/volumes"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.617860 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-scripts\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.617948 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjj8g\" (UniqueName: \"kubernetes.io/projected/2cd9b803-6702-4701-816f-aa54f55b0ddc-kube-api-access-mjj8g\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.618050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-config-data\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.618092 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.625175 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.628890 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-scripts\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.637741 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-config-data\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.641355 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjj8g\" (UniqueName: \"kubernetes.io/projected/2cd9b803-6702-4701-816f-aa54f55b0ddc-kube-api-access-mjj8g\") pod \"aodh-0\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " pod="openstack/aodh-0"
Dec 09 17:23:11 crc kubenswrapper[4853]: I1209 17:23:11.821320 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 09 17:23:12 crc kubenswrapper[4853]: I1209 17:23:12.392734 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Dec 09 17:23:12 crc kubenswrapper[4853]: W1209 17:23:12.406328 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cd9b803_6702_4701_816f_aa54f55b0ddc.slice/crio-3eaa50c8259db947c2fc4b99fa1eb22ea36e712a9e5fbd417995a240c60ed759 WatchSource:0}: Error finding container 3eaa50c8259db947c2fc4b99fa1eb22ea36e712a9e5fbd417995a240c60ed759: Status 404 returned error can't find the container with id 3eaa50c8259db947c2fc4b99fa1eb22ea36e712a9e5fbd417995a240c60ed759
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.215769 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74506648-399a-4884-b461-98b2d287223b","Type":"ContainerStarted","Data":"c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc"}
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.217985 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerStarted","Data":"3eaa50c8259db947c2fc4b99fa1eb22ea36e712a9e5fbd417995a240c60ed759"}
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.248702 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.248675815 podStartE2EDuration="3.248675815s" podCreationTimestamp="2025-12-09 17:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:13.234017645 +0000 UTC m=+1620.168756827" watchObservedRunningTime="2025-12-09 17:23:13.248675815 +0000 UTC m=+1620.183415017"
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.795241 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.800617 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-central-agent" containerID="cri-o://9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00" gracePeriod=30
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.800706 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="proxy-httpd" containerID="cri-o://04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d" gracePeriod=30
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.800842 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-notification-agent" containerID="cri-o://0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3" gracePeriod=30
Dec 09 17:23:13 crc kubenswrapper[4853]: I1209 17:23:13.800834 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="sg-core" containerID="cri-o://bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb" gracePeriod=30
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.232644 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerStarted","Data":"cdf62787aea3b7548def2cd6bd2bb131b41fdf44d10e160b7df415f72a049c58"}
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.234947 4853 generic.go:334] "Generic (PLEG): container finished" podID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerID="04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d" exitCode=0
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.234968 4853 generic.go:334] "Generic (PLEG): container finished" podID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerID="bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb" exitCode=2
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.234977 4853 generic.go:334] "Generic (PLEG): container finished" podID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerID="9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00" exitCode=0
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.235008 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerDied","Data":"04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d"}
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.235031 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerDied","Data":"bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb"}
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.235044 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerDied","Data":"9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00"}
Dec 09 17:23:14 crc kubenswrapper[4853]: E1209 17:23:14.354684 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda792bdbf_43dc_431a_98ae_d066e9f177f0.slice/crio-conmon-9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda792bdbf_43dc_431a_98ae_d066e9f177f0.slice/crio-9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 17:23:14 crc kubenswrapper[4853]: I1209 17:23:14.621396 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Dec 09 17:23:15 crc kubenswrapper[4853]: I1209 17:23:15.250673 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerStarted","Data":"be5ebf0517e96b6f6dfb692a07d0a5b8ecbc9d74745a19f3ea71329b1482a591"}
Dec 09 17:23:15 crc kubenswrapper[4853]: I1209 17:23:15.583546 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 09 17:23:15 crc kubenswrapper[4853]: I1209 17:23:15.719574 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.197050 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.278522 4853 generic.go:334] "Generic (PLEG): container finished" podID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerID="0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3" exitCode=0
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.278643 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerDied","Data":"0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3"}
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.278759 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a792bdbf-43dc-431a-98ae-d066e9f177f0","Type":"ContainerDied","Data":"5a1ad19ca12ed50e8630d62ee0da5ad7592b2ed127c11511df7dd6e40c5dcf8d"}
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.278781 4853 scope.go:117] "RemoveContainer" containerID="04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.278699 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.322726 4853 scope.go:117] "RemoveContainer" containerID="bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.336182 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-config-data\") pod \"a792bdbf-43dc-431a-98ae-d066e9f177f0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") "
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.336315 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-run-httpd\") pod \"a792bdbf-43dc-431a-98ae-d066e9f177f0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") "
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.336354 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-log-httpd\") pod \"a792bdbf-43dc-431a-98ae-d066e9f177f0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") "
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.336552 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9gp\" (UniqueName: \"kubernetes.io/projected/a792bdbf-43dc-431a-98ae-d066e9f177f0-kube-api-access-2l9gp\") pod \"a792bdbf-43dc-431a-98ae-d066e9f177f0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") "
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.336694 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-scripts\") pod \"a792bdbf-43dc-431a-98ae-d066e9f177f0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") "
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.336734 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-sg-core-conf-yaml\") pod \"a792bdbf-43dc-431a-98ae-d066e9f177f0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") "
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.336759 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-combined-ca-bundle\") pod \"a792bdbf-43dc-431a-98ae-d066e9f177f0\" (UID: \"a792bdbf-43dc-431a-98ae-d066e9f177f0\") "
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.337462 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a792bdbf-43dc-431a-98ae-d066e9f177f0" (UID: "a792bdbf-43dc-431a-98ae-d066e9f177f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.337573 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.338016 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a792bdbf-43dc-431a-98ae-d066e9f177f0" (UID: "a792bdbf-43dc-431a-98ae-d066e9f177f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.342674 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a792bdbf-43dc-431a-98ae-d066e9f177f0-kube-api-access-2l9gp" (OuterVolumeSpecName: "kube-api-access-2l9gp") pod "a792bdbf-43dc-431a-98ae-d066e9f177f0" (UID: "a792bdbf-43dc-431a-98ae-d066e9f177f0"). InnerVolumeSpecName "kube-api-access-2l9gp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.343070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-scripts" (OuterVolumeSpecName: "scripts") pod "a792bdbf-43dc-431a-98ae-d066e9f177f0" (UID: "a792bdbf-43dc-431a-98ae-d066e9f177f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.366691 4853 scope.go:117] "RemoveContainer" containerID="0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.385119 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a792bdbf-43dc-431a-98ae-d066e9f177f0" (UID: "a792bdbf-43dc-431a-98ae-d066e9f177f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.439915 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a792bdbf-43dc-431a-98ae-d066e9f177f0-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.439942 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9gp\" (UniqueName: \"kubernetes.io/projected/a792bdbf-43dc-431a-98ae-d066e9f177f0-kube-api-access-2l9gp\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.439953 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.439961 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.441881 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a792bdbf-43dc-431a-98ae-d066e9f177f0" (UID: "a792bdbf-43dc-431a-98ae-d066e9f177f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.489369 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-config-data" (OuterVolumeSpecName: "config-data") pod "a792bdbf-43dc-431a-98ae-d066e9f177f0" (UID: "a792bdbf-43dc-431a-98ae-d066e9f177f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.542496 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.542532 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a792bdbf-43dc-431a-98ae-d066e9f177f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.628102 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.644828 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.663976 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:23:16 crc kubenswrapper[4853]: E1209 17:23:16.664533 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-notification-agent"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664555 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-notification-agent"
Dec 09 17:23:16 crc kubenswrapper[4853]: E1209 17:23:16.664575 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="sg-core"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664583 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="sg-core"
Dec 09 17:23:16 crc kubenswrapper[4853]: E1209 17:23:16.664606 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-central-agent"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664612 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-central-agent"
Dec 09 17:23:16 crc kubenswrapper[4853]: E1209 17:23:16.664628 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="proxy-httpd"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664633 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="proxy-httpd"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664880 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="proxy-httpd"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664899 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-central-agent"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664913 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="sg-core"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.664929 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" containerName="ceilometer-notification-agent"
Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.736645 4853 kubelet.go:2428] "SyncLoop
UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.736792 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.738999 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.739524 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.851295 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-run-httpd\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.851350 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-scripts\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.851541 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.851682 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.851772 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w487\" (UniqueName: \"kubernetes.io/projected/953bd5a5-deed-4bcb-a68a-9f782ef884df-kube-api-access-4w487\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.852170 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-log-httpd\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.852447 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-config-data\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.954541 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-log-httpd\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 
17:23:16.954647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-config-data\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.954725 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-run-httpd\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.954752 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-scripts\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.954821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.954871 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.954939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w487\" (UniqueName: \"kubernetes.io/projected/953bd5a5-deed-4bcb-a68a-9f782ef884df-kube-api-access-4w487\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.955571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-run-httpd\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.955866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-log-httpd\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.959485 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-scripts\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.961953 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.962109 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.962663 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-config-data\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:16 crc kubenswrapper[4853]: I1209 17:23:16.977236 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w487\" (UniqueName: \"kubernetes.io/projected/953bd5a5-deed-4bcb-a68a-9f782ef884df-kube-api-access-4w487\") pod \"ceilometer-0\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " pod="openstack/ceilometer-0" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.000715 4853 scope.go:117] "RemoveContainer" containerID="9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.053473 4853 scope.go:117] "RemoveContainer" containerID="04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d" Dec 09 17:23:17 crc kubenswrapper[4853]: E1209 17:23:17.053965 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d\": container with ID starting with 04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d not found: ID does not exist" containerID="04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.053998 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d"} err="failed to get container status \"04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d\": rpc error: code = NotFound desc = could not find container \"04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d\": container with ID starting with 04a0e35d875ba30cc776c705a27644f33e17e8b73e5a6c845de1f478cccfb29d not found: ID does not exist" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.054020 4853 scope.go:117] "RemoveContainer" containerID="bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb" Dec 09 17:23:17 crc kubenswrapper[4853]: E1209 17:23:17.054674 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb\": container with ID starting with bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb not found: ID does not exist" containerID="bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.054728 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb"} err="failed to get container status \"bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb\": rpc error: code = NotFound desc = could not find container \"bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb\": container with ID starting with 
bd28d2c74b02cc40adfa4f9aac28378e0f4b2b288d05a8926ca63721a8b001cb not found: ID does not exist" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.054761 4853 scope.go:117] "RemoveContainer" containerID="0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3" Dec 09 17:23:17 crc kubenswrapper[4853]: E1209 17:23:17.055088 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3\": container with ID starting with 0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3 not found: ID does not exist" containerID="0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.055117 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3"} err="failed to get container status \"0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3\": rpc error: code = NotFound desc = could not find container \"0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3\": container with ID starting with 0d2558e05cb1bb03289412cb896b2a5cc539e711bfea6496b27aee12a4a00cc3 not found: ID does not exist" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.055168 4853 scope.go:117] "RemoveContainer" containerID="9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00" Dec 09 17:23:17 crc kubenswrapper[4853]: E1209 17:23:17.055429 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00\": container with ID starting with 9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00 not found: ID does not exist" containerID="9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.055455 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00"} err="failed to get container status \"9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00\": rpc error: code = NotFound desc = could not find container \"9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00\": container with ID starting with 9c2989fbbae64cbd4317527372fa07b63b8ea53570c8ebf0f0b7cb53738efd00 not found: ID does not exist" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.060095 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.546723 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:23:17 crc kubenswrapper[4853]: W1209 17:23:17.549321 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953bd5a5_deed_4bcb_a68a_9f782ef884df.slice/crio-fcaa0ff0d0d43f2dcc9c504f795220c7e3b47fc38daef22e9ef61d8dd3bdc404 WatchSource:0}: Error finding container fcaa0ff0d0d43f2dcc9c504f795220c7e3b47fc38daef22e9ef61d8dd3bdc404: Status 404 returned error can't find the container with id fcaa0ff0d0d43f2dcc9c504f795220c7e3b47fc38daef22e9ef61d8dd3bdc404 Dec 09 17:23:17 crc kubenswrapper[4853]: I1209 17:23:17.590658 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a792bdbf-43dc-431a-98ae-d066e9f177f0" path="/var/lib/kubelet/pods/a792bdbf-43dc-431a-98ae-d066e9f177f0/volumes" Dec 09 17:23:18 crc kubenswrapper[4853]: I1209 17:23:18.311770 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerStarted","Data":"bf2689199ab5e6e2f1529ee37f6c2d6a879ac6f5f7040a8801874c1deacce110"} Dec 09 17:23:18 crc kubenswrapper[4853]: I1209 17:23:18.313417 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerStarted","Data":"fcaa0ff0d0d43f2dcc9c504f795220c7e3b47fc38daef22e9ef61d8dd3bdc404"} Dec 09 17:23:19 crc kubenswrapper[4853]: I1209 17:23:19.328539 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerStarted","Data":"27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e"} Dec 09 17:23:19 crc kubenswrapper[4853]: I1209 17:23:19.639791 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 17:23:19 crc kubenswrapper[4853]: I1209 17:23:19.640160 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.345915 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerStarted","Data":"64bf07429d0ea4be42944bebc322ad19165e2632f662cdeea3c343ab136d86db"} Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.346367 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-api" containerID="cri-o://cdf62787aea3b7548def2cd6bd2bb131b41fdf44d10e160b7df415f72a049c58" gracePeriod=30 Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.346943 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-listener" containerID="cri-o://64bf07429d0ea4be42944bebc322ad19165e2632f662cdeea3c343ab136d86db" gracePeriod=30 Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.346989 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-notifier" containerID="cri-o://bf2689199ab5e6e2f1529ee37f6c2d6a879ac6f5f7040a8801874c1deacce110" gracePeriod=30 Dec 09 17:23:20 crc 
kubenswrapper[4853]: I1209 17:23:20.347018 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-evaluator" containerID="cri-o://be5ebf0517e96b6f6dfb692a07d0a5b8ecbc9d74745a19f3ea71329b1482a591" gracePeriod=30 Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.360711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerStarted","Data":"2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122"} Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.385462 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.263832963 podStartE2EDuration="9.38543793s" podCreationTimestamp="2025-12-09 17:23:11 +0000 UTC" firstStartedPulling="2025-12-09 17:23:12.412933707 +0000 UTC m=+1619.347672889" lastFinishedPulling="2025-12-09 17:23:19.534538664 +0000 UTC m=+1626.469277856" observedRunningTime="2025-12-09 17:23:20.367131667 +0000 UTC m=+1627.301870859" watchObservedRunningTime="2025-12-09 17:23:20.38543793 +0000 UTC m=+1627.320177112" Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.583777 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.646912 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.723743 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.239:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:20 crc kubenswrapper[4853]: I1209 17:23:20.724131 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.239:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:21 crc kubenswrapper[4853]: I1209 17:23:21.375228 4853 generic.go:334] "Generic (PLEG): container finished" podID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerID="be5ebf0517e96b6f6dfb692a07d0a5b8ecbc9d74745a19f3ea71329b1482a591" exitCode=0 Dec 09 17:23:21 crc kubenswrapper[4853]: I1209 17:23:21.375579 4853 generic.go:334] "Generic (PLEG): container finished" podID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerID="cdf62787aea3b7548def2cd6bd2bb131b41fdf44d10e160b7df415f72a049c58" exitCode=0 Dec 09 17:23:21 crc kubenswrapper[4853]: I1209 17:23:21.375276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerDied","Data":"be5ebf0517e96b6f6dfb692a07d0a5b8ecbc9d74745a19f3ea71329b1482a591"} Dec 09 17:23:21 crc kubenswrapper[4853]: I1209 17:23:21.375690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerDied","Data":"cdf62787aea3b7548def2cd6bd2bb131b41fdf44d10e160b7df415f72a049c58"} Dec 09 17:23:21 crc kubenswrapper[4853]: I1209 17:23:21.378331 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerStarted","Data":"ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0"} Dec 09 17:23:21 crc kubenswrapper[4853]: I1209 17:23:21.417273 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 17:23:23 crc kubenswrapper[4853]: I1209 17:23:23.400505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerStarted","Data":"041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388"} Dec 09 17:23:23 crc kubenswrapper[4853]: I1209 17:23:23.401286 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:23:23 crc kubenswrapper[4853]: I1209 17:23:23.428826 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.631862286 podStartE2EDuration="7.42880665s" podCreationTimestamp="2025-12-09 17:23:16 +0000 UTC" firstStartedPulling="2025-12-09 17:23:17.553038572 +0000 UTC m=+1624.487777754" lastFinishedPulling="2025-12-09 17:23:22.349982946 +0000 UTC m=+1629.284722118" observedRunningTime="2025-12-09 17:23:23.425954525 +0000 UTC m=+1630.360693717" watchObservedRunningTime="2025-12-09 17:23:23.42880665 +0000 UTC m=+1630.363545832" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.482343 4853 generic.go:334] "Generic (PLEG): container finished" podID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerID="ca10fd94240a74555b9fd6cd6d00012213f533f7ee0947ec37d6b5504fc45894" exitCode=137 Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.482708 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f342c50d-793c-4238-b9e4-de14f3473b4b","Type":"ContainerDied","Data":"ca10fd94240a74555b9fd6cd6d00012213f533f7ee0947ec37d6b5504fc45894"} Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.482937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f342c50d-793c-4238-b9e4-de14f3473b4b","Type":"ContainerDied","Data":"1eaef7803f35becc0e5f80a4ca12068a168c303d9c5c207f0128160edcf69b84"} Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.482956 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eaef7803f35becc0e5f80a4ca12068a168c303d9c5c207f0128160edcf69b84" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.489722 4853 generic.go:334] "Generic (PLEG): container finished" podID="dbb040fb-b035-41b8-82c7-b94858c83360" containerID="23fd1afd0437cfc7a7f24b768805570a765bf54cc4ebed7e501aa9ee970c576a" exitCode=137 Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.489789 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dbb040fb-b035-41b8-82c7-b94858c83360","Type":"ContainerDied","Data":"23fd1afd0437cfc7a7f24b768805570a765bf54cc4ebed7e501aa9ee970c576a"} Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.489819 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dbb040fb-b035-41b8-82c7-b94858c83360","Type":"ContainerDied","Data":"64c04e7d047d59b71e3bf3f17cb463c99b536e559f7e72d637644cda3f2a9393"} Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.489869 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c04e7d047d59b71e3bf3f17cb463c99b536e559f7e72d637644cda3f2a9393" 
Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.571423 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.573870 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.593312 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.593373 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.670879 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f342c50d-793c-4238-b9e4-de14f3473b4b-logs\") pod \"f342c50d-793c-4238-b9e4-de14f3473b4b\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.670966 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-combined-ca-bundle\") pod \"dbb040fb-b035-41b8-82c7-b94858c83360\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.671088 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-config-data\") pod \"f342c50d-793c-4238-b9e4-de14f3473b4b\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.671156 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-config-data\") pod \"dbb040fb-b035-41b8-82c7-b94858c83360\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.671175 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-combined-ca-bundle\") pod \"f342c50d-793c-4238-b9e4-de14f3473b4b\" (UID: \"f342c50d-793c-4238-b9e4-de14f3473b4b\") " Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.671381 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5hcd\" (UniqueName: \"kubernetes.io/projected/dbb040fb-b035-41b8-82c7-b94858c83360-kube-api-access-w5hcd\") pod \"dbb040fb-b035-41b8-82c7-b94858c83360\" (UID: \"dbb040fb-b035-41b8-82c7-b94858c83360\") " Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.671425 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfms2\" (UniqueName: \"kubernetes.io/projected/f342c50d-793c-4238-b9e4-de14f3473b4b-kube-api-access-pfms2\") pod \"f342c50d-793c-4238-b9e4-de14f3473b4b\" (UID: 
\"f342c50d-793c-4238-b9e4-de14f3473b4b\") " Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.671508 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f342c50d-793c-4238-b9e4-de14f3473b4b-logs" (OuterVolumeSpecName: "logs") pod "f342c50d-793c-4238-b9e4-de14f3473b4b" (UID: "f342c50d-793c-4238-b9e4-de14f3473b4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.672233 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f342c50d-793c-4238-b9e4-de14f3473b4b-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.679126 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f342c50d-793c-4238-b9e4-de14f3473b4b-kube-api-access-pfms2" (OuterVolumeSpecName: "kube-api-access-pfms2") pod "f342c50d-793c-4238-b9e4-de14f3473b4b" (UID: "f342c50d-793c-4238-b9e4-de14f3473b4b"). InnerVolumeSpecName "kube-api-access-pfms2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.681490 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb040fb-b035-41b8-82c7-b94858c83360-kube-api-access-w5hcd" (OuterVolumeSpecName: "kube-api-access-w5hcd") pod "dbb040fb-b035-41b8-82c7-b94858c83360" (UID: "dbb040fb-b035-41b8-82c7-b94858c83360"). InnerVolumeSpecName "kube-api-access-w5hcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.708216 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-config-data" (OuterVolumeSpecName: "config-data") pod "dbb040fb-b035-41b8-82c7-b94858c83360" (UID: "dbb040fb-b035-41b8-82c7-b94858c83360"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.718265 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-config-data" (OuterVolumeSpecName: "config-data") pod "f342c50d-793c-4238-b9e4-de14f3473b4b" (UID: "f342c50d-793c-4238-b9e4-de14f3473b4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.729756 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbb040fb-b035-41b8-82c7-b94858c83360" (UID: "dbb040fb-b035-41b8-82c7-b94858c83360"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.736476 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f342c50d-793c-4238-b9e4-de14f3473b4b" (UID: "f342c50d-793c-4238-b9e4-de14f3473b4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.774473 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5hcd\" (UniqueName: \"kubernetes.io/projected/dbb040fb-b035-41b8-82c7-b94858c83360-kube-api-access-w5hcd\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.774740 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfms2\" (UniqueName: \"kubernetes.io/projected/f342c50d-793c-4238-b9e4-de14f3473b4b-kube-api-access-pfms2\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.774803 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.774891 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.774955 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb040fb-b035-41b8-82c7-b94858c83360-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:28 crc kubenswrapper[4853]: I1209 17:23:28.775020 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f342c50d-793c-4238-b9e4-de14f3473b4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.499165 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.499225 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.536286 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.548069 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.560406 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: E1209 17:23:29.563825 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-log" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.563857 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-log" Dec 09 17:23:29 crc kubenswrapper[4853]: E1209 17:23:29.563881 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb040fb-b035-41b8-82c7-b94858c83360" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.563888 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb040fb-b035-41b8-82c7-b94858c83360" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 17:23:29 crc kubenswrapper[4853]: E1209 17:23:29.563914 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-metadata" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.563920 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-metadata" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.564168 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-metadata" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.564201 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" containerName="nova-metadata-log" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.564222 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb040fb-b035-41b8-82c7-b94858c83360" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.565030 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.568581 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.568673 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.568838 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.592859 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb040fb-b035-41b8-82c7-b94858c83360" path="/var/lib/kubelet/pods/dbb040fb-b035-41b8-82c7-b94858c83360/volumes" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.607752 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.628776 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.641784 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.649450 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.649911 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.650694 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.657361 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.659023 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.661687 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.667006 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.667387 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.695667 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.697165 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.697261 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.697502 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.697554 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.697577 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjk6\" (UniqueName: \"kubernetes.io/projected/49c74f06-d991-4be0-8e3f-4c76350361cd-kube-api-access-zdjk6\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799344 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799500 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799527 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-config-data\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799582 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6daa3c8b-a296-4ed8-9556-7653e9f59f44-logs\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799647 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlqhl\" (UniqueName: \"kubernetes.io/projected/6daa3c8b-a296-4ed8-9556-7653e9f59f44-kube-api-access-jlqhl\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799719 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799759 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799784 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjk6\" (UniqueName: \"kubernetes.io/projected/49c74f06-d991-4be0-8e3f-4c76350361cd-kube-api-access-zdjk6\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799822 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.799868 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.805093 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.805445 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.806117 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.813993 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/49c74f06-d991-4be0-8e3f-4c76350361cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.816199 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjk6\" (UniqueName: \"kubernetes.io/projected/49c74f06-d991-4be0-8e3f-4c76350361cd-kube-api-access-zdjk6\") pod \"nova-cell1-novncproxy-0\" (UID: \"49c74f06-d991-4be0-8e3f-4c76350361cd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.895153 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.901576 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.901628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-config-data\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.901684 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6daa3c8b-a296-4ed8-9556-7653e9f59f44-logs\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.901721 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlqhl\" (UniqueName: \"kubernetes.io/projected/6daa3c8b-a296-4ed8-9556-7653e9f59f44-kube-api-access-jlqhl\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.901782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.902736 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6daa3c8b-a296-4ed8-9556-7653e9f59f44-logs\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " 
pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.905420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-config-data\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.908455 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.909636 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.927759 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlqhl\" (UniqueName: \"kubernetes.io/projected/6daa3c8b-a296-4ed8-9556-7653e9f59f44-kube-api-access-jlqhl\") pod \"nova-metadata-0\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " pod="openstack/nova-metadata-0" Dec 09 17:23:29 crc kubenswrapper[4853]: I1209 17:23:29.980286 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.429177 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.513918 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"49c74f06-d991-4be0-8e3f-4c76350361cd","Type":"ContainerStarted","Data":"e381094af84f8a7e34217133d8e1ac66dc6d1bf83a444ae4850c2a68889dd410"} Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.514816 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.524081 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.555710 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:30 crc kubenswrapper[4853]: W1209 17:23:30.558647 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6daa3c8b_a296_4ed8_9556_7653e9f59f44.slice/crio-c4261e18df0631488a7734bff547cdfc84d675b2a756d3ebaa616844293e1bcc WatchSource:0}: Error finding container c4261e18df0631488a7734bff547cdfc84d675b2a756d3ebaa616844293e1bcc: Status 404 returned error can't find the container with id c4261e18df0631488a7734bff547cdfc84d675b2a756d3ebaa616844293e1bcc Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.724835 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-47wqg"] Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.738339 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.804113 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-47wqg"] Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.885855 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.886209 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.886259 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwfs\" (UniqueName: \"kubernetes.io/projected/9cea25af-f35d-42ec-accb-ef519f796dc8-kube-api-access-jhwfs\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.886332 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.886354 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.886440 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.988749 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.988845 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.988892 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jhwfs\" (UniqueName: \"kubernetes.io/projected/9cea25af-f35d-42ec-accb-ef519f796dc8-kube-api-access-jhwfs\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.989019 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.989044 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.989137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.989893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.990160 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.990356 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.991647 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:30 crc kubenswrapper[4853]: I1209 17:23:30.992937 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.010391 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwfs\" (UniqueName: 
\"kubernetes.io/projected/9cea25af-f35d-42ec-accb-ef519f796dc8-kube-api-access-jhwfs\") pod \"dnsmasq-dns-6b7bbf7cf9-47wqg\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.120191 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.528866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"49c74f06-d991-4be0-8e3f-4c76350361cd","Type":"ContainerStarted","Data":"041435b81570a0da5eea864a3c8a9636402d1116af369c15943074cdd596836d"} Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.539505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6daa3c8b-a296-4ed8-9556-7653e9f59f44","Type":"ContainerStarted","Data":"71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439"} Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.539588 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6daa3c8b-a296-4ed8-9556-7653e9f59f44","Type":"ContainerStarted","Data":"afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff"} Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.539622 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6daa3c8b-a296-4ed8-9556-7653e9f59f44","Type":"ContainerStarted","Data":"c4261e18df0631488a7734bff547cdfc84d675b2a756d3ebaa616844293e1bcc"} Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.568975 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.568959017 podStartE2EDuration="2.568959017s" podCreationTimestamp="2025-12-09 17:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:31.548632452 +0000 UTC m=+1638.483371634" watchObservedRunningTime="2025-12-09 17:23:31.568959017 +0000 UTC m=+1638.503698199" Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.582410 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f342c50d-793c-4238-b9e4-de14f3473b4b" path="/var/lib/kubelet/pods/f342c50d-793c-4238-b9e4-de14f3473b4b/volumes" Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.595947 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.595926367 podStartE2EDuration="2.595926367s" podCreationTimestamp="2025-12-09 17:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:31.588218684 +0000 UTC m=+1638.522957856" watchObservedRunningTime="2025-12-09 17:23:31.595926367 +0000 UTC m=+1638.530665559" Dec 09 17:23:31 crc kubenswrapper[4853]: I1209 17:23:31.751044 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-47wqg"] Dec 09 17:23:32 crc kubenswrapper[4853]: I1209 17:23:32.569833 4853 generic.go:334] "Generic (PLEG): container finished" podID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerID="c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5" exitCode=0 Dec 09 17:23:32 crc kubenswrapper[4853]: I1209 17:23:32.569925 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" event={"ID":"9cea25af-f35d-42ec-accb-ef519f796dc8","Type":"ContainerDied","Data":"c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5"} Dec 09 17:23:32 crc kubenswrapper[4853]: I1209 17:23:32.570159 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" event={"ID":"9cea25af-f35d-42ec-accb-ef519f796dc8","Type":"ContainerStarted","Data":"fee9b909063a2c67c41013e3a013a014a1f49e556c40eac89d6dea90588085f7"} Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.396958 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.397510 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-central-agent" containerID="cri-o://27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e" gracePeriod=30 Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.397634 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="sg-core" containerID="cri-o://ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0" gracePeriod=30 Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.397689 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="proxy-httpd" containerID="cri-o://041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388" gracePeriod=30 Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.397643 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-notification-agent" containerID="cri-o://2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122" gracePeriod=30 Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.407036 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.242:3000/\": EOF" Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.588171 4853 generic.go:334] "Generic (PLEG): container finished" podID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerID="ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0" exitCode=2 Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.589514 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerDied","Data":"ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0"} Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.593837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" event={"ID":"9cea25af-f35d-42ec-accb-ef519f796dc8","Type":"ContainerStarted","Data":"6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa"} Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.595107 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.620908 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" podStartSLOduration=3.620886433 podStartE2EDuration="3.620886433s" podCreationTimestamp="2025-12-09 17:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:33.61012834 +0000 UTC m=+1640.544867522" watchObservedRunningTime="2025-12-09 17:23:33.620886433 +0000 UTC m=+1640.555625615" Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.678005 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.678931 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-log" containerID="cri-o://e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40" gracePeriod=30 Dec 09 17:23:33 crc kubenswrapper[4853]: I1209 17:23:33.679147 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-api" containerID="cri-o://d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372" gracePeriod=30 Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.611444 4853 generic.go:334] "Generic (PLEG): container finished" podID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerID="041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388" exitCode=0 Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.611788 4853 generic.go:334] "Generic (PLEG): container finished" podID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerID="27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e" exitCode=0 Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.611842 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerDied","Data":"041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388"} Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.611868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerDied","Data":"27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e"} Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.614818 4853 generic.go:334] "Generic (PLEG): container finished" podID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerID="e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40" exitCode=143 Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.615803 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a0a5e5-59d2-44a9-a9fc-fb8578265225","Type":"ContainerDied","Data":"e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40"} Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.895977 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.981472 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 17:23:34 crc kubenswrapper[4853]: I1209 17:23:34.981788 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.180947 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.315816 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w487\" (UniqueName: \"kubernetes.io/projected/953bd5a5-deed-4bcb-a68a-9f782ef884df-kube-api-access-4w487\") pod \"953bd5a5-deed-4bcb-a68a-9f782ef884df\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.316103 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-scripts\") pod \"953bd5a5-deed-4bcb-a68a-9f782ef884df\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.316146 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-log-httpd\") pod \"953bd5a5-deed-4bcb-a68a-9f782ef884df\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.316277 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-combined-ca-bundle\") pod \"953bd5a5-deed-4bcb-a68a-9f782ef884df\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.316305 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-sg-core-conf-yaml\") pod \"953bd5a5-deed-4bcb-a68a-9f782ef884df\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.316353 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-run-httpd\") pod \"953bd5a5-deed-4bcb-a68a-9f782ef884df\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.316437 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-config-data\") pod \"953bd5a5-deed-4bcb-a68a-9f782ef884df\" (UID: \"953bd5a5-deed-4bcb-a68a-9f782ef884df\") " Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.317085 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "953bd5a5-deed-4bcb-a68a-9f782ef884df" (UID: "953bd5a5-deed-4bcb-a68a-9f782ef884df"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.317701 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "953bd5a5-deed-4bcb-a68a-9f782ef884df" (UID: "953bd5a5-deed-4bcb-a68a-9f782ef884df"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.325329 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953bd5a5-deed-4bcb-a68a-9f782ef884df-kube-api-access-4w487" (OuterVolumeSpecName: "kube-api-access-4w487") pod "953bd5a5-deed-4bcb-a68a-9f782ef884df" (UID: "953bd5a5-deed-4bcb-a68a-9f782ef884df"). InnerVolumeSpecName "kube-api-access-4w487". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.331863 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-scripts" (OuterVolumeSpecName: "scripts") pod "953bd5a5-deed-4bcb-a68a-9f782ef884df" (UID: "953bd5a5-deed-4bcb-a68a-9f782ef884df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.352724 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "953bd5a5-deed-4bcb-a68a-9f782ef884df" (UID: "953bd5a5-deed-4bcb-a68a-9f782ef884df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.419008 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.419043 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.419053 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.419061 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953bd5a5-deed-4bcb-a68a-9f782ef884df-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.419069 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w487\" (UniqueName: \"kubernetes.io/projected/953bd5a5-deed-4bcb-a68a-9f782ef884df-kube-api-access-4w487\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.442328 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "953bd5a5-deed-4bcb-a68a-9f782ef884df" (UID: "953bd5a5-deed-4bcb-a68a-9f782ef884df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.456504 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-config-data" (OuterVolumeSpecName: "config-data") pod "953bd5a5-deed-4bcb-a68a-9f782ef884df" (UID: "953bd5a5-deed-4bcb-a68a-9f782ef884df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.521150 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.521204 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953bd5a5-deed-4bcb-a68a-9f782ef884df-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.639924 4853 generic.go:334] "Generic (PLEG): container finished" podID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerID="2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122" exitCode=0 Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.639972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerDied","Data":"2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122"} Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.639989 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.640012 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953bd5a5-deed-4bcb-a68a-9f782ef884df","Type":"ContainerDied","Data":"fcaa0ff0d0d43f2dcc9c504f795220c7e3b47fc38daef22e9ef61d8dd3bdc404"} Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.640033 4853 scope.go:117] "RemoveContainer" containerID="041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.675554 4853 scope.go:117] "RemoveContainer" containerID="ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.680947 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.720340 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.727666 4853 scope.go:117] "RemoveContainer" containerID="2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.730128 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.730979 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-notification-agent" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.730994 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-notification-agent" Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.731010 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="proxy-httpd" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.731017 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="proxy-httpd" Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.731037 4853 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-central-agent" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.731043 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-central-agent" Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.731056 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="sg-core" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.731062 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="sg-core" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.731296 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="proxy-httpd" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.731310 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="sg-core" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.731321 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-central-agent" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.731334 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" containerName="ceilometer-notification-agent" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.733744 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.737763 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.738344 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.742810 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.790714 4853 scope.go:117] "RemoveContainer" containerID="27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.830628 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.830688 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-config-data\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.830738 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s27l5\" (UniqueName: \"kubernetes.io/projected/0703aded-0b68-4295-82b2-675c69e88c1f-kube-api-access-s27l5\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.830785 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-log-httpd\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.830835 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-run-httpd\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.830912 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.830962 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-scripts\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.845698 4853 scope.go:117] "RemoveContainer" containerID="041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388" Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.846117 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388\": container with ID starting with 041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388 not found: ID does not exist" containerID="041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.846143 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388"} err="failed to get container status \"041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388\": rpc error: code = NotFound desc = could not find container \"041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388\": container with ID starting with 041babf180e58e14b82e4957805fe780ce8b2af6dd13234b993e309fa28d4388 not found: ID does not exist" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.846163 4853 scope.go:117] "RemoveContainer" containerID="ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0" Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.846671 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0\": container with ID starting with ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0 not found: ID does not exist" containerID="ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.846708 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0"} err="failed to get container status 
\"ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0\": rpc error: code = NotFound desc = could not find container \"ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0\": container with ID starting with ae71c217283eb2ba21231bea5dcbd05b7f69161606fcfb761ae2a476c61abec0 not found: ID does not exist" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.846736 4853 scope.go:117] "RemoveContainer" containerID="2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122" Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.847501 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122\": container with ID starting with 2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122 not found: ID does not exist" containerID="2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.847532 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122"} err="failed to get container status \"2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122\": rpc error: code = NotFound desc = could not find container \"2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122\": container with ID starting with 2e08d2fea5d4e2aab415ea882b7a2539adbb0bb251dc17ed2b3a1bcd95c5d122 not found: ID does not exist" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.847577 4853 scope.go:117] "RemoveContainer" containerID="27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e" Dec 09 17:23:36 crc kubenswrapper[4853]: E1209 17:23:36.848073 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e\": container with ID starting with 27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e not found: ID does not exist" containerID="27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.848098 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e"} err="failed to get container status \"27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e\": rpc error: code = NotFound desc = could not find container \"27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e\": container with ID starting with 27e1cb2b954de1b10e62935ac47ff6577ef0e10596e8331ffeb79af798c8630e not found: ID does not exist" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.933499 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-scripts\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.934865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0" Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.935220 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s27l5\" (UniqueName: \"kubernetes.io/projected/0703aded-0b68-4295-82b2-675c69e88c1f-kube-api-access-s27l5\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.935412 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-log-httpd\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.935694 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-run-httpd\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.936017 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.936539 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-run-httpd\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.937207 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-log-httpd\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.940824 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.941455 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-config-data\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.944275 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.953812 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-scripts\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:36 crc kubenswrapper[4853]: I1209 17:23:36.957434 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s27l5\" (UniqueName: \"kubernetes.io/projected/0703aded-0b68-4295-82b2-675c69e88c1f-kube-api-access-s27l5\") pod \"ceilometer-0\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " pod="openstack/ceilometer-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.044285 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.275787 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.452338 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-config-data\") pod \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") "
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.452591 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w52bj\" (UniqueName: \"kubernetes.io/projected/63a0a5e5-59d2-44a9-a9fc-fb8578265225-kube-api-access-w52bj\") pod \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") "
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.452637 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-combined-ca-bundle\") pod \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") "
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.452655 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0a5e5-59d2-44a9-a9fc-fb8578265225-logs\") pod \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\" (UID: \"63a0a5e5-59d2-44a9-a9fc-fb8578265225\") "
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.453676 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a0a5e5-59d2-44a9-a9fc-fb8578265225-logs" (OuterVolumeSpecName: "logs") pod "63a0a5e5-59d2-44a9-a9fc-fb8578265225" (UID: "63a0a5e5-59d2-44a9-a9fc-fb8578265225"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.463806 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a0a5e5-59d2-44a9-a9fc-fb8578265225-kube-api-access-w52bj" (OuterVolumeSpecName: "kube-api-access-w52bj") pod "63a0a5e5-59d2-44a9-a9fc-fb8578265225" (UID: "63a0a5e5-59d2-44a9-a9fc-fb8578265225"). InnerVolumeSpecName "kube-api-access-w52bj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.486051 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a0a5e5-59d2-44a9-a9fc-fb8578265225" (UID: "63a0a5e5-59d2-44a9-a9fc-fb8578265225"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.492791 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-config-data" (OuterVolumeSpecName: "config-data") pod "63a0a5e5-59d2-44a9-a9fc-fb8578265225" (UID: "63a0a5e5-59d2-44a9-a9fc-fb8578265225"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:37 crc kubenswrapper[4853]: W1209 17:23:37.539930 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0703aded_0b68_4295_82b2_675c69e88c1f.slice/crio-f8df74ffa9a15ab6e35c3d33087292cd2230102ca094ce59fd95f87bf60c2a72 WatchSource:0}: Error finding container f8df74ffa9a15ab6e35c3d33087292cd2230102ca094ce59fd95f87bf60c2a72: Status 404 returned error can't find the container with id f8df74ffa9a15ab6e35c3d33087292cd2230102ca094ce59fd95f87bf60c2a72
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.540501 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.555708 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.555748 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w52bj\" (UniqueName: \"kubernetes.io/projected/63a0a5e5-59d2-44a9-a9fc-fb8578265225-kube-api-access-w52bj\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.555763 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0a5e5-59d2-44a9-a9fc-fb8578265225-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.555775 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a0a5e5-59d2-44a9-a9fc-fb8578265225-logs\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.591292 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953bd5a5-deed-4bcb-a68a-9f782ef884df" path="/var/lib/kubelet/pods/953bd5a5-deed-4bcb-a68a-9f782ef884df/volumes"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.655632 4853 generic.go:334] "Generic (PLEG): container finished" podID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerID="d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372" exitCode=0
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.655922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a0a5e5-59d2-44a9-a9fc-fb8578265225","Type":"ContainerDied","Data":"d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372"}
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.656036 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63a0a5e5-59d2-44a9-a9fc-fb8578265225","Type":"ContainerDied","Data":"ba1f53962aac780fd8b09dc25e5d19011f2d91efeb904b9fc2a441b21735182c"}
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.656181 4853 scope.go:117] "RemoveContainer" containerID="d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.656541 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.658773 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerStarted","Data":"f8df74ffa9a15ab6e35c3d33087292cd2230102ca094ce59fd95f87bf60c2a72"}
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.687365 4853 scope.go:117] "RemoveContainer" containerID="e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.694034 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.706122 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.718782 4853 scope.go:117] "RemoveContainer" containerID="d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372"
Dec 09 17:23:37 crc kubenswrapper[4853]: E1209 17:23:37.719356 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372\": container with ID starting with d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372 not found: ID does not exist" containerID="d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.719419 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372"} err="failed to get container status \"d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372\": rpc error: code = NotFound desc = could not find container \"d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372\": container with ID starting with d19cf2e213f032daa6b1f088792d31f81bfdbf65f591b63124695b123c858372 not found: ID does not exist"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.719454 4853 scope.go:117] "RemoveContainer" containerID="e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.719731 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:37 crc kubenswrapper[4853]: E1209 17:23:37.719914 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40\": container with ID starting with e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40 not found: ID does not exist" containerID="e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.720002 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40"} err="failed to get container status \"e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40\": rpc error: code = NotFound desc = could not find container \"e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40\": container with ID starting with e3de79c31402a06383a7054f9a16332199c0264d90e134e1254becbf910bce40 not found: ID does not exist"
Dec 09 17:23:37 crc kubenswrapper[4853]: E1209 17:23:37.720390 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-api"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.720414 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-api"
Dec 09 17:23:37 crc kubenswrapper[4853]: E1209 17:23:37.720425 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-log"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.720434 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-log"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.720766 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-api"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.720797 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" containerName="nova-api-log"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.722352 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.724993 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.725386 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.725547 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.729756 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.863845 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.864010 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.864079 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-config-data\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.864127 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmvqg\" (UniqueName: \"kubernetes.io/projected/c612e107-4d2c-404b-9366-8d4bb35caa9a-kube-api-access-dmvqg\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.864161 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c612e107-4d2c-404b-9366-8d4bb35caa9a-logs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.864203 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-public-tls-certs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.966734 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmvqg\" (UniqueName: \"kubernetes.io/projected/c612e107-4d2c-404b-9366-8d4bb35caa9a-kube-api-access-dmvqg\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.966807 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c612e107-4d2c-404b-9366-8d4bb35caa9a-logs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.966852 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-public-tls-certs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.966956 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.967069 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.967136 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-config-data\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.968460 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c612e107-4d2c-404b-9366-8d4bb35caa9a-logs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.974556 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.974578 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-public-tls-certs\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.974645 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-config-data\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.974651 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:37 crc kubenswrapper[4853]: I1209 17:23:37.985996 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmvqg\" (UniqueName: \"kubernetes.io/projected/c612e107-4d2c-404b-9366-8d4bb35caa9a-kube-api-access-dmvqg\") pod \"nova-api-0\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") " pod="openstack/nova-api-0"
Dec 09 17:23:38 crc kubenswrapper[4853]: I1209 17:23:38.148100 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:38.675026 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerStarted","Data":"d5c610012a8b7131ab4438259a745df256bb522a879b8f3f43f5a3126e163f67"} Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.470874 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.582884 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a0a5e5-59d2-44a9-a9fc-fb8578265225" path="/var/lib/kubelet/pods/63a0a5e5-59d2-44a9-a9fc-fb8578265225/volumes" Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.692950 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c612e107-4d2c-404b-9366-8d4bb35caa9a","Type":"ContainerStarted","Data":"ad96ab60fbf8d1862c5e2f009640a778a2aa97ad4ca2a8e5378c341257b40bcb"} Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.693005 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c612e107-4d2c-404b-9366-8d4bb35caa9a","Type":"ContainerStarted","Data":"81aa9b3bbc1308eeed44008cd20976d051d79e7621b48e49dda81d060cc0052b"} Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.710774 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerStarted","Data":"c8eca91b3d10d751ba9044539a565ccf93bf2b6850494f542dc05613a111b270"} Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.895373 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.914143 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.981304 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 17:23:39 crc kubenswrapper[4853]: I1209 17:23:39.983307 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.725243 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerStarted","Data":"5557a8ed4d24736afde9e48642f7b6abd22d43f208550189969f19bc03cab7bf"} Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.732261 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c612e107-4d2c-404b-9366-8d4bb35caa9a","Type":"ContainerStarted","Data":"f9e6ec9df1238d57904d954b0e54fb64b85391ad02465d2e98018bef8a0f9982"} Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.781157 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.785053 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.78502346 podStartE2EDuration="3.78502346s" podCreationTimestamp="2025-12-09 17:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:40.77853066 +0000 
UTC m=+1647.713269872" watchObservedRunningTime="2025-12-09 17:23:40.78502346 +0000 UTC m=+1647.719762642" Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.952018 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-n25td"] Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.953634 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.955986 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.956172 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.961327 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n25td"] Dec 09 17:23:40 crc kubenswrapper[4853]: I1209 17:23:40.999880 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.000193 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.068773 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87b6q\" (UniqueName: \"kubernetes.io/projected/638b7c6f-30d1-4d07-8532-51736f5e74c5-kube-api-access-87b6q\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.068833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-config-data\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.068917 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.068974 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-scripts\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.121725 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:23:41 crc 
kubenswrapper[4853]: I1209 17:23:41.171217 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87b6q\" (UniqueName: \"kubernetes.io/projected/638b7c6f-30d1-4d07-8532-51736f5e74c5-kube-api-access-87b6q\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.171298 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-config-data\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.171376 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.171423 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-scripts\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.192050 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-scripts\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.199366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-config-data\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.205956 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.208241 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87b6q\" (UniqueName: \"kubernetes.io/projected/638b7c6f-30d1-4d07-8532-51736f5e74c5-kube-api-access-87b6q\") pod \"nova-cell1-cell-mapping-n25td\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.213652 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zwbj9"] Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.213940 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" podUID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerName="dnsmasq-dns" 
containerID="cri-o://d835ad1c0ab14fd20632c7945c31ed32a0697be7aa734d39e5519452ed869585" gracePeriod=10 Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.271438 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.754276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerStarted","Data":"7591ff75fe2fe274b6d74ae2f84daded079492052474677a40990897961cee13"} Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.755717 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.769947 4853 generic.go:334] "Generic (PLEG): container finished" podID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerID="d835ad1c0ab14fd20632c7945c31ed32a0697be7aa734d39e5519452ed869585" exitCode=0 Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.770260 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" event={"ID":"7086dd56-2ca2-4d9c-ab48-5837b89f117a","Type":"ContainerDied","Data":"d835ad1c0ab14fd20632c7945c31ed32a0697be7aa734d39e5519452ed869585"} Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.792912 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9661750599999999 podStartE2EDuration="5.792873057s" podCreationTimestamp="2025-12-09 17:23:36 +0000 UTC" firstStartedPulling="2025-12-09 17:23:37.542044346 +0000 UTC m=+1644.476783528" lastFinishedPulling="2025-12-09 17:23:41.368742343 +0000 UTC m=+1648.303481525" observedRunningTime="2025-12-09 17:23:41.787572637 +0000 UTC m=+1648.722311819" watchObservedRunningTime="2025-12-09 17:23:41.792873057 +0000 UTC m=+1648.727612239" Dec 09 17:23:41 crc kubenswrapper[4853]: I1209 17:23:41.891695 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.006166 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-svc\") pod \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.006457 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-sb\") pod \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.006567 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-nb\") pod \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.006644 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddngf\" (UniqueName: \"kubernetes.io/projected/7086dd56-2ca2-4d9c-ab48-5837b89f117a-kube-api-access-ddngf\") pod \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.012028 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n25td"] Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.014944 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7086dd56-2ca2-4d9c-ab48-5837b89f117a-kube-api-access-ddngf" (OuterVolumeSpecName: "kube-api-access-ddngf") pod "7086dd56-2ca2-4d9c-ab48-5837b89f117a" (UID: "7086dd56-2ca2-4d9c-ab48-5837b89f117a"). InnerVolumeSpecName "kube-api-access-ddngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.006683 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-config\") pod \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.021856 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-swift-storage-0\") pod \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\" (UID: \"7086dd56-2ca2-4d9c-ab48-5837b89f117a\") " Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.025240 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddngf\" (UniqueName: \"kubernetes.io/projected/7086dd56-2ca2-4d9c-ab48-5837b89f117a-kube-api-access-ddngf\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.088505 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7086dd56-2ca2-4d9c-ab48-5837b89f117a" (UID: "7086dd56-2ca2-4d9c-ab48-5837b89f117a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.100795 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7086dd56-2ca2-4d9c-ab48-5837b89f117a" (UID: "7086dd56-2ca2-4d9c-ab48-5837b89f117a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.129526 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7086dd56-2ca2-4d9c-ab48-5837b89f117a" (UID: "7086dd56-2ca2-4d9c-ab48-5837b89f117a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.139429 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.139463 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.139473 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.143275 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7086dd56-2ca2-4d9c-ab48-5837b89f117a" (UID: "7086dd56-2ca2-4d9c-ab48-5837b89f117a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.164285 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-config" (OuterVolumeSpecName: "config") pod "7086dd56-2ca2-4d9c-ab48-5837b89f117a" (UID: "7086dd56-2ca2-4d9c-ab48-5837b89f117a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.241692 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.241727 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7086dd56-2ca2-4d9c-ab48-5837b89f117a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.781810 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n25td" event={"ID":"638b7c6f-30d1-4d07-8532-51736f5e74c5","Type":"ContainerStarted","Data":"e73549315cd7e4c1ce3b6077db0953299515c9c4d0caa7a8fdf6cecd506431d9"} Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.782214 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n25td" event={"ID":"638b7c6f-30d1-4d07-8532-51736f5e74c5","Type":"ContainerStarted","Data":"c6faa14988046927e3e149af9ba807556894240fbff64023607a3495b2fd8319"} Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.784622 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" event={"ID":"7086dd56-2ca2-4d9c-ab48-5837b89f117a","Type":"ContainerDied","Data":"1941d328c703a3314df85feafaccc7290fc0d25d8c79929641017ab2253c46b0"} Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.784681 4853 scope.go:117] "RemoveContainer" containerID="d835ad1c0ab14fd20632c7945c31ed32a0697be7aa734d39e5519452ed869585" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.784756 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-zwbj9" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.808728 4853 scope.go:117] "RemoveContainer" containerID="4e4c061abbd3234034482daab3a616450459b3d239aaafd2996d4bd8b9ea4303" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.849034 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-n25td" podStartSLOduration=2.8357952060000002 podStartE2EDuration="2.835795206s" podCreationTimestamp="2025-12-09 17:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:42.834652306 +0000 UTC m=+1649.769391488" watchObservedRunningTime="2025-12-09 17:23:42.835795206 +0000 UTC m=+1649.770534388" Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.893724 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zwbj9"] Dec 09 17:23:42 crc kubenswrapper[4853]: I1209 17:23:42.903683 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-zwbj9"] Dec 09 17:23:43 crc kubenswrapper[4853]: I1209 17:23:43.581143 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" path="/var/lib/kubelet/pods/7086dd56-2ca2-4d9c-ab48-5837b89f117a/volumes" Dec 09 17:23:47 crc kubenswrapper[4853]: I1209 17:23:47.854355 4853 generic.go:334] "Generic (PLEG): container finished" podID="638b7c6f-30d1-4d07-8532-51736f5e74c5" containerID="e73549315cd7e4c1ce3b6077db0953299515c9c4d0caa7a8fdf6cecd506431d9" exitCode=0 Dec 09 17:23:47 crc kubenswrapper[4853]: I1209 17:23:47.854383 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n25td" event={"ID":"638b7c6f-30d1-4d07-8532-51736f5e74c5","Type":"ContainerDied","Data":"e73549315cd7e4c1ce3b6077db0953299515c9c4d0caa7a8fdf6cecd506431d9"} Dec 09 17:23:48 crc kubenswrapper[4853]: I1209 17:23:48.149477 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 17:23:48 crc kubenswrapper[4853]: I1209 17:23:48.150367 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.165086 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.247:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.165582 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.247:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.348027 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.361402 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-scripts\") pod \"638b7c6f-30d1-4d07-8532-51736f5e74c5\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.361450 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-config-data\") pod \"638b7c6f-30d1-4d07-8532-51736f5e74c5\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.361672 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-combined-ca-bundle\") pod \"638b7c6f-30d1-4d07-8532-51736f5e74c5\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.361849 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87b6q\" (UniqueName: \"kubernetes.io/projected/638b7c6f-30d1-4d07-8532-51736f5e74c5-kube-api-access-87b6q\") pod \"638b7c6f-30d1-4d07-8532-51736f5e74c5\" (UID: \"638b7c6f-30d1-4d07-8532-51736f5e74c5\") " Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.373162 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-scripts" (OuterVolumeSpecName: "scripts") pod "638b7c6f-30d1-4d07-8532-51736f5e74c5" (UID: "638b7c6f-30d1-4d07-8532-51736f5e74c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.373773 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638b7c6f-30d1-4d07-8532-51736f5e74c5-kube-api-access-87b6q" (OuterVolumeSpecName: "kube-api-access-87b6q") pod "638b7c6f-30d1-4d07-8532-51736f5e74c5" (UID: "638b7c6f-30d1-4d07-8532-51736f5e74c5"). InnerVolumeSpecName "kube-api-access-87b6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.405739 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-config-data" (OuterVolumeSpecName: "config-data") pod "638b7c6f-30d1-4d07-8532-51736f5e74c5" (UID: "638b7c6f-30d1-4d07-8532-51736f5e74c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.422957 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "638b7c6f-30d1-4d07-8532-51736f5e74c5" (UID: "638b7c6f-30d1-4d07-8532-51736f5e74c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.467515 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.467571 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.467583 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b7c6f-30d1-4d07-8532-51736f5e74c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.467615 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87b6q\" (UniqueName: \"kubernetes.io/projected/638b7c6f-30d1-4d07-8532-51736f5e74c5-kube-api-access-87b6q\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.885169 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n25td" event={"ID":"638b7c6f-30d1-4d07-8532-51736f5e74c5","Type":"ContainerDied","Data":"c6faa14988046927e3e149af9ba807556894240fbff64023607a3495b2fd8319"} Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.885218 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6faa14988046927e3e149af9ba807556894240fbff64023607a3495b2fd8319" Dec 09 17:23:49 crc kubenswrapper[4853]: I1209 17:23:49.885299 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n25td" Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.019301 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.020899 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.028140 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.119949 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.120198 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-log" containerID="cri-o://ad96ab60fbf8d1862c5e2f009640a778a2aa97ad4ca2a8e5378c341257b40bcb" gracePeriod=30 Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.120255 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-api" containerID="cri-o://f9e6ec9df1238d57904d954b0e54fb64b85391ad02465d2e98018bef8a0f9982" gracePeriod=30 Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.147015 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.147272 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="74506648-399a-4884-b461-98b2d287223b" 
containerName="nova-scheduler-scheduler" containerID="cri-o://c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc" gracePeriod=30 Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.169534 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:50 crc kubenswrapper[4853]: E1209 17:23:50.586223 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 17:23:50 crc kubenswrapper[4853]: E1209 17:23:50.587767 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 17:23:50 crc kubenswrapper[4853]: E1209 17:23:50.591409 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 17:23:50 crc kubenswrapper[4853]: E1209 17:23:50.591470 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="74506648-399a-4884-b461-98b2d287223b" containerName="nova-scheduler-scheduler" Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.907566 4853 generic.go:334] "Generic (PLEG): container finished" podID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerID="ad96ab60fbf8d1862c5e2f009640a778a2aa97ad4ca2a8e5378c341257b40bcb" exitCode=143 Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.907745 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c612e107-4d2c-404b-9366-8d4bb35caa9a","Type":"ContainerDied","Data":"ad96ab60fbf8d1862c5e2f009640a778a2aa97ad4ca2a8e5378c341257b40bcb"} Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.940913 4853 generic.go:334] "Generic (PLEG): container finished" podID="74506648-399a-4884-b461-98b2d287223b" containerID="c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc" exitCode=0 Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.941001 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74506648-399a-4884-b461-98b2d287223b","Type":"ContainerDied","Data":"c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc"} Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.949952 4853 generic.go:334] "Generic (PLEG): container finished" podID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerID="64bf07429d0ea4be42944bebc322ad19165e2632f662cdeea3c343ab136d86db" exitCode=137 Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.949982 4853 generic.go:334] "Generic (PLEG): container finished" podID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerID="bf2689199ab5e6e2f1529ee37f6c2d6a879ac6f5f7040a8801874c1deacce110" exitCode=137 Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.950676 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerDied","Data":"64bf07429d0ea4be42944bebc322ad19165e2632f662cdeea3c343ab136d86db"} Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.950721 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerDied","Data":"bf2689199ab5e6e2f1529ee37f6c2d6a879ac6f5f7040a8801874c1deacce110"} Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.950731 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2cd9b803-6702-4701-816f-aa54f55b0ddc","Type":"ContainerDied","Data":"3eaa50c8259db947c2fc4b99fa1eb22ea36e712a9e5fbd417995a240c60ed759"} Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.950740 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eaa50c8259db947c2fc4b99fa1eb22ea36e712a9e5fbd417995a240c60ed759" Dec 09 17:23:50 crc kubenswrapper[4853]: I1209 17:23:50.965011 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.001532 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.036994 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-scripts\") pod \"2cd9b803-6702-4701-816f-aa54f55b0ddc\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.037046 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjj8g\" (UniqueName: \"kubernetes.io/projected/2cd9b803-6702-4701-816f-aa54f55b0ddc-kube-api-access-mjj8g\") pod \"2cd9b803-6702-4701-816f-aa54f55b0ddc\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.037108 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-config-data\") pod \"2cd9b803-6702-4701-816f-aa54f55b0ddc\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.037158 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-combined-ca-bundle\") pod \"2cd9b803-6702-4701-816f-aa54f55b0ddc\" (UID: \"2cd9b803-6702-4701-816f-aa54f55b0ddc\") " Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.049514 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd9b803-6702-4701-816f-aa54f55b0ddc-kube-api-access-mjj8g" (OuterVolumeSpecName: "kube-api-access-mjj8g") pod "2cd9b803-6702-4701-816f-aa54f55b0ddc" (UID: "2cd9b803-6702-4701-816f-aa54f55b0ddc"). InnerVolumeSpecName "kube-api-access-mjj8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.051108 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-scripts" (OuterVolumeSpecName: "scripts") pod "2cd9b803-6702-4701-816f-aa54f55b0ddc" (UID: "2cd9b803-6702-4701-816f-aa54f55b0ddc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.079682 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.141654 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf8lb\" (UniqueName: \"kubernetes.io/projected/74506648-399a-4884-b461-98b2d287223b-kube-api-access-kf8lb\") pod \"74506648-399a-4884-b461-98b2d287223b\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.141796 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-config-data\") pod \"74506648-399a-4884-b461-98b2d287223b\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.141839 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-combined-ca-bundle\") pod \"74506648-399a-4884-b461-98b2d287223b\" (UID: \"74506648-399a-4884-b461-98b2d287223b\") " Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.142449 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.142469 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjj8g\" (UniqueName: \"kubernetes.io/projected/2cd9b803-6702-4701-816f-aa54f55b0ddc-kube-api-access-mjj8g\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.148054 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74506648-399a-4884-b461-98b2d287223b-kube-api-access-kf8lb" (OuterVolumeSpecName: "kube-api-access-kf8lb") pod "74506648-399a-4884-b461-98b2d287223b" (UID: "74506648-399a-4884-b461-98b2d287223b"). InnerVolumeSpecName "kube-api-access-kf8lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.184712 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-config-data" (OuterVolumeSpecName: "config-data") pod "74506648-399a-4884-b461-98b2d287223b" (UID: "74506648-399a-4884-b461-98b2d287223b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.198725 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74506648-399a-4884-b461-98b2d287223b" (UID: "74506648-399a-4884-b461-98b2d287223b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.232474 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cd9b803-6702-4701-816f-aa54f55b0ddc" (UID: "2cd9b803-6702-4701-816f-aa54f55b0ddc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.251419 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf8lb\" (UniqueName: \"kubernetes.io/projected/74506648-399a-4884-b461-98b2d287223b-kube-api-access-kf8lb\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.251465 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.251478 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.251491 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74506648-399a-4884-b461-98b2d287223b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.279779 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-config-data" (OuterVolumeSpecName: "config-data") pod "2cd9b803-6702-4701-816f-aa54f55b0ddc" (UID: "2cd9b803-6702-4701-816f-aa54f55b0ddc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.353476 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd9b803-6702-4701-816f-aa54f55b0ddc-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.965214 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"74506648-399a-4884-b461-98b2d287223b","Type":"ContainerDied","Data":"014934c4e203430dffe29f9eed545b960074a810823c6879606460a3bba6364f"} Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.965251 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.965319 4853 scope.go:117] "RemoveContainer" containerID="c08420e2c62aa5e2032f266181d0d8ac615c645c6dc36af2686b89182a483dfc" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.965334 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.966008 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-log" containerID="cri-o://afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff" gracePeriod=30 Dec 09 17:23:51 crc kubenswrapper[4853]: I1209 17:23:51.966072 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-metadata" containerID="cri-o://71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439" gracePeriod=30 Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.009594 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.031682 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.057563 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.090495 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.107679 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108246 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638b7c6f-30d1-4d07-8532-51736f5e74c5" containerName="nova-manage" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108264 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="638b7c6f-30d1-4d07-8532-51736f5e74c5" containerName="nova-manage" Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108284 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-listener" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108291 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-listener" Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108316 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerName="dnsmasq-dns" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108322 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerName="dnsmasq-dns" Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108336 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74506648-399a-4884-b461-98b2d287223b" containerName="nova-scheduler-scheduler" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108342 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="74506648-399a-4884-b461-98b2d287223b" containerName="nova-scheduler-scheduler" Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108355 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-api" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108361 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-api" Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108371 4853 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-notifier" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108377 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-notifier" Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108383 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-evaluator" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108391 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-evaluator" Dec 09 17:23:52 crc kubenswrapper[4853]: E1209 17:23:52.108406 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerName="init" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108412 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerName="init" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108674 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="74506648-399a-4884-b461-98b2d287223b" containerName="nova-scheduler-scheduler" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108694 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-notifier" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108707 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="638b7c6f-30d1-4d07-8532-51736f5e74c5" containerName="nova-manage" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108716 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-api" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108731 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-listener" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108742 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" containerName="aodh-evaluator" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.108756 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7086dd56-2ca2-4d9c-ab48-5837b89f117a" containerName="dnsmasq-dns" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.110777 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.118055 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.118279 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.118459 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.118752 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.125749 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.127627 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.130288 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kq94v" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.132879 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.156688 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.180039 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.180950 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvpzv\" (UniqueName: \"kubernetes.io/projected/0f94de49-e6de-4df0-8615-1f037e3c6ac1-kube-api-access-mvpzv\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.181095 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-combined-ca-bundle\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.181146 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-public-tls-certs\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.181166 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-scripts\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.181192 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxll\" (UniqueName: \"kubernetes.io/projected/67d920ff-76fb-42a7-aff2-252d556a1d10-kube-api-access-mbxll\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc 
kubenswrapper[4853]: I1209 17:23:52.181382 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94de49-e6de-4df0-8615-1f037e3c6ac1-config-data\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.181405 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-config-data\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.181461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-internal-tls-certs\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.181556 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94de49-e6de-4df0-8615-1f037e3c6ac1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.294487 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-combined-ca-bundle\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.299112 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-public-tls-certs\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.299291 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-scripts\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.299434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbxll\" (UniqueName: \"kubernetes.io/projected/67d920ff-76fb-42a7-aff2-252d556a1d10-kube-api-access-mbxll\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.307894 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94de49-e6de-4df0-8615-1f037e3c6ac1-config-data\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.308274 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-config-data\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 
17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.308367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-internal-tls-certs\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.308513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94de49-e6de-4df0-8615-1f037e3c6ac1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.308677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvpzv\" (UniqueName: \"kubernetes.io/projected/0f94de49-e6de-4df0-8615-1f037e3c6ac1-kube-api-access-mvpzv\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.316234 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-combined-ca-bundle\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.320225 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-scripts\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.321094 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f94de49-e6de-4df0-8615-1f037e3c6ac1-config-data\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.324554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f94de49-e6de-4df0-8615-1f037e3c6ac1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.324748 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbxll\" (UniqueName: \"kubernetes.io/projected/67d920ff-76fb-42a7-aff2-252d556a1d10-kube-api-access-mbxll\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.325398 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-config-data\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.325642 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-internal-tls-certs\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.338509 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67d920ff-76fb-42a7-aff2-252d556a1d10-public-tls-certs\") pod \"aodh-0\" (UID: \"67d920ff-76fb-42a7-aff2-252d556a1d10\") " pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.355332 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvpzv\" (UniqueName: \"kubernetes.io/projected/0f94de49-e6de-4df0-8615-1f037e3c6ac1-kube-api-access-mvpzv\") pod \"nova-scheduler-0\" (UID: \"0f94de49-e6de-4df0-8615-1f037e3c6ac1\") " pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.457382 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.478759 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.973522 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.978291 4853 generic.go:334] "Generic (PLEG): container finished" podID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerID="afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff" exitCode=143 Dec 09 17:23:52 crc kubenswrapper[4853]: I1209 17:23:52.978367 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6daa3c8b-a296-4ed8-9556-7653e9f59f44","Type":"ContainerDied","Data":"afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff"} Dec 09 17:23:53 crc kubenswrapper[4853]: I1209 17:23:53.114920 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 17:23:53 crc kubenswrapper[4853]: I1209 17:23:53.581542 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd9b803-6702-4701-816f-aa54f55b0ddc" path="/var/lib/kubelet/pods/2cd9b803-6702-4701-816f-aa54f55b0ddc/volumes" Dec 09 17:23:53 crc kubenswrapper[4853]: I1209 17:23:53.586845 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74506648-399a-4884-b461-98b2d287223b" path="/var/lib/kubelet/pods/74506648-399a-4884-b461-98b2d287223b/volumes" Dec 09 17:23:53 crc kubenswrapper[4853]: I1209 17:23:53.991998 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f94de49-e6de-4df0-8615-1f037e3c6ac1","Type":"ContainerStarted","Data":"7674c26879e5a078a8eeff3f42ae6454a505931c7986253afd7535c991b322bd"} Dec 09 17:23:53 crc kubenswrapper[4853]: I1209 17:23:53.992048 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f94de49-e6de-4df0-8615-1f037e3c6ac1","Type":"ContainerStarted","Data":"5e119ad6bbe16c9bff026169b4f9bdbc7cba53e2b6f0407916ee54ec9d0b929d"} Dec 09 17:23:53 crc kubenswrapper[4853]: I1209 17:23:53.993976 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"67d920ff-76fb-42a7-aff2-252d556a1d10","Type":"ContainerStarted","Data":"32521bd1cf288ad1fc06a5da6c1c7b528ea8a6fa22455d9b66ed87ec23aed163"} Dec 09 17:23:53 crc kubenswrapper[4853]: I1209 17:23:53.994011 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"67d920ff-76fb-42a7-aff2-252d556a1d10","Type":"ContainerStarted","Data":"ce4a6461d54ff0543a80f2bdf99cfc841e8e88565e1b9d4845aa0bbc7e7c3914"} Dec 09 17:23:54 crc 
Dec 09 17:23:54 crc kubenswrapper[4853]: I1209 17:23:54.011550 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.011531458 podStartE2EDuration="2.011531458s" podCreationTimestamp="2025-12-09 17:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:54.009026962 +0000 UTC m=+1660.943766154" watchObservedRunningTime="2025-12-09 17:23:54.011531458 +0000 UTC m=+1660.946270660"
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.010196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"67d920ff-76fb-42a7-aff2-252d556a1d10","Type":"ContainerStarted","Data":"fdcb11fd1326849c637df96acaafeebf65e14617d6c5cdda1f5bfdb0a5ac4e40"}
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.012132 4853 generic.go:334] "Generic (PLEG): container finished" podID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerID="f9e6ec9df1238d57904d954b0e54fb64b85391ad02465d2e98018bef8a0f9982" exitCode=0
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.013234 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c612e107-4d2c-404b-9366-8d4bb35caa9a","Type":"ContainerDied","Data":"f9e6ec9df1238d57904d954b0e54fb64b85391ad02465d2e98018bef8a0f9982"}
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.013261 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c612e107-4d2c-404b-9366-8d4bb35caa9a","Type":"ContainerDied","Data":"81aa9b3bbc1308eeed44008cd20976d051d79e7621b48e49dda81d060cc0052b"}
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.013271 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81aa9b3bbc1308eeed44008cd20976d051d79e7621b48e49dda81d060cc0052b"
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.079157 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.176268 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-public-tls-certs\") pod \"c612e107-4d2c-404b-9366-8d4bb35caa9a\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") "
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.176328 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-combined-ca-bundle\") pod \"c612e107-4d2c-404b-9366-8d4bb35caa9a\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") "
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.176431 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmvqg\" (UniqueName: \"kubernetes.io/projected/c612e107-4d2c-404b-9366-8d4bb35caa9a-kube-api-access-dmvqg\") pod \"c612e107-4d2c-404b-9366-8d4bb35caa9a\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") "
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.176454 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c612e107-4d2c-404b-9366-8d4bb35caa9a-logs\") pod \"c612e107-4d2c-404b-9366-8d4bb35caa9a\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") "
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.176482 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-internal-tls-certs\") pod \"c612e107-4d2c-404b-9366-8d4bb35caa9a\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") "
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.176557 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-config-data\") pod \"c612e107-4d2c-404b-9366-8d4bb35caa9a\" (UID: \"c612e107-4d2c-404b-9366-8d4bb35caa9a\") "
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.177815 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c612e107-4d2c-404b-9366-8d4bb35caa9a-logs" (OuterVolumeSpecName: "logs") pod "c612e107-4d2c-404b-9366-8d4bb35caa9a" (UID: "c612e107-4d2c-404b-9366-8d4bb35caa9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.181843 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c612e107-4d2c-404b-9366-8d4bb35caa9a-kube-api-access-dmvqg" (OuterVolumeSpecName: "kube-api-access-dmvqg") pod "c612e107-4d2c-404b-9366-8d4bb35caa9a" (UID: "c612e107-4d2c-404b-9366-8d4bb35caa9a"). InnerVolumeSpecName "kube-api-access-dmvqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.219805 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-config-data" (OuterVolumeSpecName: "config-data") pod "c612e107-4d2c-404b-9366-8d4bb35caa9a" (UID: "c612e107-4d2c-404b-9366-8d4bb35caa9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.221335 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": read tcp 10.217.0.2:47528->10.217.0.244:8775: read: connection reset by peer"
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.221368 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": read tcp 10.217.0.2:47530->10.217.0.244:8775: read: connection reset by peer"
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.231167 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c612e107-4d2c-404b-9366-8d4bb35caa9a" (UID: "c612e107-4d2c-404b-9366-8d4bb35caa9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.261193 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c612e107-4d2c-404b-9366-8d4bb35caa9a" (UID: "c612e107-4d2c-404b-9366-8d4bb35caa9a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.272802 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c612e107-4d2c-404b-9366-8d4bb35caa9a" (UID: "c612e107-4d2c-404b-9366-8d4bb35caa9a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.282503 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.282545 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.282565 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmvqg\" (UniqueName: \"kubernetes.io/projected/c612e107-4d2c-404b-9366-8d4bb35caa9a-kube-api-access-dmvqg\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.282584 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c612e107-4d2c-404b-9366-8d4bb35caa9a-logs\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.282621 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.282638 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c612e107-4d2c-404b-9366-8d4bb35caa9a-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 17:23:55 crc kubenswrapper[4853]: I1209 17:23:55.953524 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.014434 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-config-data\") pod \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") "
Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.014508 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlqhl\" (UniqueName: \"kubernetes.io/projected/6daa3c8b-a296-4ed8-9556-7653e9f59f44-kube-api-access-jlqhl\") pod \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") "
Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.014629 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6daa3c8b-a296-4ed8-9556-7653e9f59f44-logs\") pod \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") "
Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.014676 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-combined-ca-bundle\") pod \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") "
Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.014876 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-nova-metadata-tls-certs\") pod \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") "
\"6daa3c8b-a296-4ed8-9556-7653e9f59f44\" (UID: \"6daa3c8b-a296-4ed8-9556-7653e9f59f44\") " Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.017700 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6daa3c8b-a296-4ed8-9556-7653e9f59f44-logs" (OuterVolumeSpecName: "logs") pod "6daa3c8b-a296-4ed8-9556-7653e9f59f44" (UID: "6daa3c8b-a296-4ed8-9556-7653e9f59f44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.021835 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6daa3c8b-a296-4ed8-9556-7653e9f59f44-kube-api-access-jlqhl" (OuterVolumeSpecName: "kube-api-access-jlqhl") pod "6daa3c8b-a296-4ed8-9556-7653e9f59f44" (UID: "6daa3c8b-a296-4ed8-9556-7653e9f59f44"). InnerVolumeSpecName "kube-api-access-jlqhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.033698 4853 generic.go:334] "Generic (PLEG): container finished" podID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerID="71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439" exitCode=0 Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.033843 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.034011 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6daa3c8b-a296-4ed8-9556-7653e9f59f44","Type":"ContainerDied","Data":"71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439"} Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.034064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6daa3c8b-a296-4ed8-9556-7653e9f59f44","Type":"ContainerDied","Data":"c4261e18df0631488a7734bff547cdfc84d675b2a756d3ebaa616844293e1bcc"} Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.034082 4853 scope.go:117] "RemoveContainer" containerID="71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.050408 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.050692 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"67d920ff-76fb-42a7-aff2-252d556a1d10","Type":"ContainerStarted","Data":"3fdd4e9e7e1a5c417b8fd6d6171edf0132d350f1184e34686799adb893ff8a1c"} Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.084378 4853 scope.go:117] "RemoveContainer" containerID="afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.104797 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.119783 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6daa3c8b-a296-4ed8-9556-7653e9f59f44-logs\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.119827 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlqhl\" (UniqueName: \"kubernetes.io/projected/6daa3c8b-a296-4ed8-9556-7653e9f59f44-kube-api-access-jlqhl\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.120645 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.136456 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6daa3c8b-a296-4ed8-9556-7653e9f59f44" (UID: "6daa3c8b-a296-4ed8-9556-7653e9f59f44"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.136904 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: E1209 17:23:56.137670 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-api" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.137696 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-api" Dec 09 17:23:56 crc kubenswrapper[4853]: E1209 17:23:56.137722 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-log" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.137729 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-log" Dec 09 17:23:56 crc kubenswrapper[4853]: E1209 17:23:56.137748 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-log" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.137754 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-log" Dec 09 17:23:56 crc kubenswrapper[4853]: E1209 17:23:56.137774 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-metadata" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.137781 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-metadata" Dec 09 
17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.138007 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-log" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.138037 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-log" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.138045 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" containerName="nova-metadata-metadata" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.138058 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" containerName="nova-api-api" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.139584 4853 scope.go:117] "RemoveContainer" containerID="71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439" Dec 09 17:23:56 crc kubenswrapper[4853]: E1209 17:23:56.143013 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439\": container with ID starting with 71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439 not found: ID does not exist" containerID="71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.143061 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439"} err="failed to get container status \"71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439\": rpc error: code = NotFound desc = could not find container \"71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439\": container with ID starting with 71cbac19d5ae5e9a802f4150ca8777ab892486932eab5c60e0b800f26b638439 not found: ID does not exist" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.143087 4853 scope.go:117] "RemoveContainer" containerID="afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff" Dec 09 17:23:56 crc kubenswrapper[4853]: E1209 17:23:56.151837 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff\": container with ID starting with afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff not found: ID does not exist" containerID="afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.151881 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff"} err="failed to get container status \"afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff\": rpc error: code = NotFound desc = could not find container \"afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff\": container with ID starting with afd990ebfd03c9dfb813fbf3c303da6126f323408d9cacc58689d48f602b15ff not found: ID does not exist" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.155056 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.163008 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.163143 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.163291 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.170303 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.199547 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-config-data" (OuterVolumeSpecName: "config-data") pod "6daa3c8b-a296-4ed8-9556-7653e9f59f44" (UID: "6daa3c8b-a296-4ed8-9556-7653e9f59f44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.202484 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6daa3c8b-a296-4ed8-9556-7653e9f59f44" (UID: "6daa3c8b-a296-4ed8-9556-7653e9f59f44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.220719 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.220819 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.220866 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03d1537-be82-46ba-a69f-06732942a6e6-logs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.220903 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.220949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vz7\" (UniqueName: \"kubernetes.io/projected/f03d1537-be82-46ba-a69f-06732942a6e6-kube-api-access-92vz7\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.221090 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-config-data\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.221198 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.221231 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.221245 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6daa3c8b-a296-4ed8-9556-7653e9f59f44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.323341 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.323407 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03d1537-be82-46ba-a69f-06732942a6e6-logs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.323440 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.323479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vz7\" (UniqueName: \"kubernetes.io/projected/f03d1537-be82-46ba-a69f-06732942a6e6-kube-api-access-92vz7\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.323591 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-config-data\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.323645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.327974 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03d1537-be82-46ba-a69f-06732942a6e6-logs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc 
kubenswrapper[4853]: I1209 17:23:56.328770 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-public-tls-certs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.329151 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.330217 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-config-data\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.336522 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d1537-be82-46ba-a69f-06732942a6e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.344945 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vz7\" (UniqueName: \"kubernetes.io/projected/f03d1537-be82-46ba-a69f-06732942a6e6-kube-api-access-92vz7\") pod \"nova-api-0\" (UID: \"f03d1537-be82-46ba-a69f-06732942a6e6\") " pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.381684 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.392483 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.404894 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.407155 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.408954 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.413966 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.424402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.424525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.424563 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3566a2-2617-4b9d-951e-a124a027c307-logs\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.424578 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mpqc\" (UniqueName: \"kubernetes.io/projected/2c3566a2-2617-4b9d-951e-a124a027c307-kube-api-access-8mpqc\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.424639 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-config-data\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.437167 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.496131 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.526722 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3566a2-2617-4b9d-951e-a124a027c307-logs\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.527076 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mpqc\" (UniqueName: \"kubernetes.io/projected/2c3566a2-2617-4b9d-951e-a124a027c307-kube-api-access-8mpqc\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.527182 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-config-data\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.527229 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c3566a2-2617-4b9d-951e-a124a027c307-logs\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.527360 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.527516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.531297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.531446 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.531690 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c3566a2-2617-4b9d-951e-a124a027c307-config-data\") pod \"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.546715 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mpqc\" (UniqueName: \"kubernetes.io/projected/2c3566a2-2617-4b9d-951e-a124a027c307-kube-api-access-8mpqc\") pod 
\"nova-metadata-0\" (UID: \"2c3566a2-2617-4b9d-951e-a124a027c307\") " pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.731027 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 17:23:56 crc kubenswrapper[4853]: I1209 17:23:56.985795 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 17:23:56 crc kubenswrapper[4853]: W1209 17:23:56.995772 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf03d1537_be82_46ba_a69f_06732942a6e6.slice/crio-450bb46fc6fa1b5b4ab646178434d4518bbcc2fa38d42ded8db8fe707947e1c0 WatchSource:0}: Error finding container 450bb46fc6fa1b5b4ab646178434d4518bbcc2fa38d42ded8db8fe707947e1c0: Status 404 returned error can't find the container with id 450bb46fc6fa1b5b4ab646178434d4518bbcc2fa38d42ded8db8fe707947e1c0 Dec 09 17:23:57 crc kubenswrapper[4853]: I1209 17:23:57.066245 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"67d920ff-76fb-42a7-aff2-252d556a1d10","Type":"ContainerStarted","Data":"fd1420a87ed089dfc71dc99db6fc5c6d7afa6556208954b2ed690bfe2e21d00d"} Dec 09 17:23:57 crc kubenswrapper[4853]: I1209 17:23:57.068081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03d1537-be82-46ba-a69f-06732942a6e6","Type":"ContainerStarted","Data":"450bb46fc6fa1b5b4ab646178434d4518bbcc2fa38d42ded8db8fe707947e1c0"} Dec 09 17:23:57 crc kubenswrapper[4853]: I1209 17:23:57.122271 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.34628988 podStartE2EDuration="5.122248122s" podCreationTimestamp="2025-12-09 17:23:52 +0000 UTC" firstStartedPulling="2025-12-09 17:23:52.971542346 +0000 UTC m=+1659.906281528" lastFinishedPulling="2025-12-09 17:23:55.747500588 +0000 UTC m=+1662.682239770" observedRunningTime="2025-12-09 17:23:57.086504391 +0000 UTC m=+1664.021243573" watchObservedRunningTime="2025-12-09 17:23:57.122248122 +0000 UTC m=+1664.056987304" Dec 09 17:23:57 crc kubenswrapper[4853]: I1209 17:23:57.306404 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 17:23:57 crc kubenswrapper[4853]: W1209 17:23:57.308735 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c3566a2_2617_4b9d_951e_a124a027c307.slice/crio-77de6f303cccd6d0680905cf441f75c3eec3efbf7deaa9da20b33185711a4832 WatchSource:0}: Error finding container 77de6f303cccd6d0680905cf441f75c3eec3efbf7deaa9da20b33185711a4832: Status 404 returned error can't find the container with id 77de6f303cccd6d0680905cf441f75c3eec3efbf7deaa9da20b33185711a4832 Dec 09 17:23:57 crc kubenswrapper[4853]: I1209 17:23:57.479848 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 17:23:57 crc kubenswrapper[4853]: I1209 17:23:57.581009 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6daa3c8b-a296-4ed8-9556-7653e9f59f44" path="/var/lib/kubelet/pods/6daa3c8b-a296-4ed8-9556-7653e9f59f44/volumes" Dec 09 17:23:57 crc kubenswrapper[4853]: I1209 17:23:57.581781 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c612e107-4d2c-404b-9366-8d4bb35caa9a" path="/var/lib/kubelet/pods/c612e107-4d2c-404b-9366-8d4bb35caa9a/volumes" Dec 09 17:23:58 crc kubenswrapper[4853]: 
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.083047 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c3566a2-2617-4b9d-951e-a124a027c307","Type":"ContainerStarted","Data":"5ceb0fb0a76087f0e0b6fc9cec1b887aa2bf96fd690ee2a9741ee80984d3ca34"}
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.083064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2c3566a2-2617-4b9d-951e-a124a027c307","Type":"ContainerStarted","Data":"77de6f303cccd6d0680905cf441f75c3eec3efbf7deaa9da20b33185711a4832"}
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.084777 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03d1537-be82-46ba-a69f-06732942a6e6","Type":"ContainerStarted","Data":"866c2133dbaa1f7cd268bcdf71b5e56b3b489ae5f7a7cf3f6dc000066d2717f2"}
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.084815 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03d1537-be82-46ba-a69f-06732942a6e6","Type":"ContainerStarted","Data":"b4f939f4d0b6c3d8c9a94f587d4bcd1433a680e8611652602eb06c4f34fe0ccc"}
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.139815 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.139792243 podStartE2EDuration="2.139792243s" podCreationTimestamp="2025-12-09 17:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:58.138240321 +0000 UTC m=+1665.072979493" watchObservedRunningTime="2025-12-09 17:23:58.139792243 +0000 UTC m=+1665.074531425"
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.209462 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.209439486 podStartE2EDuration="2.209439486s" podCreationTimestamp="2025-12-09 17:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:23:58.174934238 +0000 UTC m=+1665.109673420" watchObservedRunningTime="2025-12-09 17:23:58.209439486 +0000 UTC m=+1665.144178668"
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.592839 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.592925 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.592988 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4"
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.594357 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 17:23:58 crc kubenswrapper[4853]: I1209 17:23:58.594481 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" gracePeriod=600
Dec 09 17:23:58 crc kubenswrapper[4853]: E1209 17:23:58.763699 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 17:23:59 crc kubenswrapper[4853]: I1209 17:23:59.161216 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" exitCode=0
Dec 09 17:23:59 crc kubenswrapper[4853]: I1209 17:23:59.163562 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740"}
Dec 09 17:23:59 crc kubenswrapper[4853]: I1209 17:23:59.163627 4853 scope.go:117] "RemoveContainer" containerID="f6f987a0c43c35d8870f761c6c8a9e4bd42afed53db05f41b90af0f3121049ce"
Dec 09 17:23:59 crc kubenswrapper[4853]: I1209 17:23:59.164612 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740"
Dec 09 17:23:59 crc kubenswrapper[4853]: E1209 17:23:59.164901 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 17:24:01 crc kubenswrapper[4853]: I1209 17:24:01.732581 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 17:24:01 crc kubenswrapper[4853]: I1209 17:24:01.733269 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 17:24:02 crc kubenswrapper[4853]: I1209 17:24:02.479039 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 09 17:24:02 crc kubenswrapper[4853]: I1209 17:24:02.560969 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 09 17:24:03 crc kubenswrapper[4853]: I1209 17:24:03.262140 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 09 17:24:06 crc kubenswrapper[4853]: I1209 17:24:06.496825 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 09 17:24:06 crc kubenswrapper[4853]: I1209 17:24:06.497190 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 09 17:24:06 crc kubenswrapper[4853]: I1209 17:24:06.731925 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 17:24:06 crc kubenswrapper[4853]: I1209 17:24:06.731978 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 17:24:07 crc kubenswrapper[4853]: I1209 17:24:07.051889 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 09 17:24:07 crc kubenswrapper[4853]: I1209 17:24:07.510951 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f03d1537-be82-46ba-a69f-06732942a6e6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 17:24:07 crc kubenswrapper[4853]: I1209 17:24:07.510974 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f03d1537-be82-46ba-a69f-06732942a6e6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 17:24:07 crc kubenswrapper[4853]: I1209 17:24:07.745823 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2c3566a2-2617-4b9d-951e-a124a027c307" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.252:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 17:24:07 crc kubenswrapper[4853]: I1209 17:24:07.746095 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2c3566a2-2617-4b9d-951e-a124a027c307" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.252:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 09 17:24:10 crc kubenswrapper[4853]: I1209 17:24:10.568037 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740"
Dec 09 17:24:10 crc kubenswrapper[4853]: E1209 17:24:10.568793 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 17:24:11 crc kubenswrapper[4853]: I1209 17:24:11.406568 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 09 17:24:11 crc kubenswrapper[4853]: I1209 17:24:11.407206 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5" containerName="kube-state-metrics" containerID="cri-o://c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff" gracePeriod=30
Dec 09 17:24:11 crc kubenswrapper[4853]: I1209 17:24:11.650267 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"]
Dec 09 17:24:11 crc kubenswrapper[4853]: I1209 17:24:11.650461 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="62a55e7b-220c-44a9-acdd-ad06588c155e" containerName="mysqld-exporter" containerID="cri-o://cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1" gracePeriod=30
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.125015 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.266493 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.275764 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c942\" (UniqueName: \"kubernetes.io/projected/97a2bf6d-2b94-43ab-92e9-7a2355ae7df5-kube-api-access-6c942\") pod \"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5\" (UID: \"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5\") "
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.308934 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a2bf6d-2b94-43ab-92e9-7a2355ae7df5-kube-api-access-6c942" (OuterVolumeSpecName: "kube-api-access-6c942") pod "97a2bf6d-2b94-43ab-92e9-7a2355ae7df5" (UID: "97a2bf6d-2b94-43ab-92e9-7a2355ae7df5"). InnerVolumeSpecName "kube-api-access-6c942". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.361772 4853 generic.go:334] "Generic (PLEG): container finished" podID="62a55e7b-220c-44a9-acdd-ad06588c155e" containerID="cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1" exitCode=2
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.361870 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.362716 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"62a55e7b-220c-44a9-acdd-ad06588c155e","Type":"ContainerDied","Data":"cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1"}
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.362744 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"62a55e7b-220c-44a9-acdd-ad06588c155e","Type":"ContainerDied","Data":"76886e3a6dcf2128f962e37bf5cadf5c2d2de317a80a3295f203ed22ea9ace1f"}
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.362761 4853 scope.go:117] "RemoveContainer" containerID="cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1"
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.376910 4853 generic.go:334] "Generic (PLEG): container finished" podID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5" containerID="c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff" exitCode=2
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.376957 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5","Type":"ContainerDied","Data":"c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff"}
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.376982 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a2bf6d-2b94-43ab-92e9-7a2355ae7df5","Type":"ContainerDied","Data":"eb53d0c671b4cf0081fa46f1fd4a5740c5358fb06240fb4281e3e3f869fb768d"}
Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.377034 4853 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.381401 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-config-data\") pod \"62a55e7b-220c-44a9-acdd-ad06588c155e\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.381669 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-combined-ca-bundle\") pod \"62a55e7b-220c-44a9-acdd-ad06588c155e\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.381835 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t59dq\" (UniqueName: \"kubernetes.io/projected/62a55e7b-220c-44a9-acdd-ad06588c155e-kube-api-access-t59dq\") pod \"62a55e7b-220c-44a9-acdd-ad06588c155e\" (UID: \"62a55e7b-220c-44a9-acdd-ad06588c155e\") " Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.382344 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c942\" (UniqueName: \"kubernetes.io/projected/97a2bf6d-2b94-43ab-92e9-7a2355ae7df5-kube-api-access-6c942\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.397619 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a55e7b-220c-44a9-acdd-ad06588c155e-kube-api-access-t59dq" (OuterVolumeSpecName: "kube-api-access-t59dq") pod "62a55e7b-220c-44a9-acdd-ad06588c155e" (UID: "62a55e7b-220c-44a9-acdd-ad06588c155e"). InnerVolumeSpecName "kube-api-access-t59dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.437058 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62a55e7b-220c-44a9-acdd-ad06588c155e" (UID: "62a55e7b-220c-44a9-acdd-ad06588c155e"). InnerVolumeSpecName "combined-ca-bundle". 
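
The stretch above is a clean pod-teardown trace for mysqld-exporter-0 and kube-state-metrics-0: "SyncLoop DELETE" arrives from the API, "Killing container with a grace period" fires with gracePeriod=30 (the Kubernetes default terminationGracePeriodSeconds), PLEG reports "container finished" with exitCode=2 for each, and the reconciler unmounts the projected and secret volumes before the "Volume detached" confirmations. The "not found" errors just below, where a second RemoveContainer pass queries a container the first pass already deleted, appear to be the usual benign ordering artifact rather than a failure. A sketch that pulls the exit codes back out of the PLEG entries:

import re, sys

# generic.go PLEG entries look like:
#   "Generic (PLEG): container finished" podID="62a55e7b-..." containerID="cd59d7a3..." exitCode=2
FINISHED_RE = re.compile(
    r'"Generic \(PLEG\): container finished" podID="(?P<pod>[^"]+)" '
    r'containerID="(?P<cid>[0-9a-f]+)" exitCode=(?P<code>-?\d+)')

for line in sys.stdin:
    for m in FINISHED_RE.finditer(line):
        print(f"pod {m['pod'][:8]}  container {m['cid'][:12]}  exit {m['code']}")

On this journal it prints exit 2 for the two exporter containers and, further below, exit 0 for three ceilometer containers and exit 2 for sg-core.
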
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.437832 4853 scope.go:117] "RemoveContainer" containerID="cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1" Dec 09 17:24:12 crc kubenswrapper[4853]: E1209 17:24:12.441914 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1\": container with ID starting with cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1 not found: ID does not exist" containerID="cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.441972 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1"} err="failed to get container status \"cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1\": rpc error: code = NotFound desc = could not find container \"cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1\": container with ID starting with cd59d7a3fd61d6b941bbf0421feb738640c363ed9773dd2fec0c76970f4d23b1 not found: ID does not exist" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.442001 4853 scope.go:117] "RemoveContainer" containerID="c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.457077 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.479577 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.485030 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t59dq\" (UniqueName: \"kubernetes.io/projected/62a55e7b-220c-44a9-acdd-ad06588c155e-kube-api-access-t59dq\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.485054 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.509521 4853 scope.go:117] "RemoveContainer" containerID="c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.514124 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: E1209 17:24:12.525863 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff\": container with ID starting with c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff not found: ID does not exist" containerID="c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.525915 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff"} err="failed to get container status \"c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff\": rpc error: code = NotFound desc = could not find container 
\"c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff\": container with ID starting with c8e8a17ac4914c0aa122faf4953f3bb687cad7ec7447eaa128442de548f6adff not found: ID does not exist" Dec 09 17:24:12 crc kubenswrapper[4853]: E1209 17:24:12.527159 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a55e7b-220c-44a9-acdd-ad06588c155e" containerName="mysqld-exporter" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.527198 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a55e7b-220c-44a9-acdd-ad06588c155e" containerName="mysqld-exporter" Dec 09 17:24:12 crc kubenswrapper[4853]: E1209 17:24:12.527240 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5" containerName="kube-state-metrics" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.527247 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5" containerName="kube-state-metrics" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.527884 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a55e7b-220c-44a9-acdd-ad06588c155e" containerName="mysqld-exporter" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.527926 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5" containerName="kube-state-metrics" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.529805 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.536211 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.536451 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.538669 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-config-data" (OuterVolumeSpecName: "config-data") pod "62a55e7b-220c-44a9-acdd-ad06588c155e" (UID: "62a55e7b-220c-44a9-acdd-ad06588c155e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.566659 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.586886 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a55e7b-220c-44a9-acdd-ad06588c155e-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.688891 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.688976 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.688996 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.689017 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt77m\" (UniqueName: \"kubernetes.io/projected/fa877e16-b821-4ef2-8840-806f276e784c-kube-api-access-lt77m\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.694877 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.707300 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.717149 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.719010 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.722944 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.723041 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.739957 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.791125 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.791236 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.791261 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.791440 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt77m\" (UniqueName: \"kubernetes.io/projected/fa877e16-b821-4ef2-8840-806f276e784c-kube-api-access-lt77m\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.797991 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.798001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.798002 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa877e16-b821-4ef2-8840-806f276e784c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.823846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt77m\" (UniqueName: \"kubernetes.io/projected/fa877e16-b821-4ef2-8840-806f276e784c-kube-api-access-lt77m\") pod 
\"kube-state-metrics-0\" (UID: \"fa877e16-b821-4ef2-8840-806f276e784c\") " pod="openstack/kube-state-metrics-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.893831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwvv\" (UniqueName: \"kubernetes.io/projected/38030366-fb21-422a-8e22-db3aa78915ea-kube-api-access-jbwvv\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.894050 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.894118 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-config-data\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.894180 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:12 crc kubenswrapper[4853]: I1209 17:24:12.953242 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.003569 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.003744 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-config-data\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.003814 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.003997 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbwvv\" (UniqueName: \"kubernetes.io/projected/38030366-fb21-422a-8e22-db3aa78915ea-kube-api-access-jbwvv\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.011125 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-config-data\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.012341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.017022 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38030366-fb21-422a-8e22-db3aa78915ea-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.027439 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbwvv\" (UniqueName: \"kubernetes.io/projected/38030366-fb21-422a-8e22-db3aa78915ea-kube-api-access-jbwvv\") pod \"mysqld-exporter-0\" (UID: \"38030366-fb21-422a-8e22-db3aa78915ea\") " pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.037519 4853 util.go:30] "No sandbox for pod can be found. 
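
The long run above is the volume reconciler bringing the replacement pods up: for every volume it logs "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded". Pairing starts with completions per UniqueName (which embeds the pod UID, so identically named volumes like config-data on different pods stay distinct) is a quick way to spot a mount that never finished. A sketch, again assuming a real one-entry-per-line journal:

import re, sys

STARTED = re.compile(
    r'"operationExecutor\.MountVolume started for volume .*?\(UniqueName: \\"(?P<u>[^\\"]+)\\"')
DONE = re.compile(
    r'"MountVolume\.SetUp succeeded for volume .*?\(UniqueName: \\"(?P<u>[^\\"]+)\\"')

pending = set()
for line in sys.stdin:
    for m in STARTED.finditer(line):
        pending.add(m['u'])
    for m in DONE.finditer(line):
        pending.discard(m['u'])

print("mounts started but never completed:", sorted(pending) or "none")
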
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.485583 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.589457 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a55e7b-220c-44a9-acdd-ad06588c155e" path="/var/lib/kubelet/pods/62a55e7b-220c-44a9-acdd-ad06588c155e/volumes" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.589971 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a2bf6d-2b94-43ab-92e9-7a2355ae7df5" path="/var/lib/kubelet/pods/97a2bf6d-2b94-43ab-92e9-7a2355ae7df5/volumes" Dec 09 17:24:13 crc kubenswrapper[4853]: I1209 17:24:13.642078 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.118488 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.119118 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-central-agent" containerID="cri-o://d5c610012a8b7131ab4438259a745df256bb522a879b8f3f43f5a3126e163f67" gracePeriod=30 Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.119191 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="sg-core" containerID="cri-o://5557a8ed4d24736afde9e48642f7b6abd22d43f208550189969f19bc03cab7bf" gracePeriod=30 Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.119293 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-notification-agent" containerID="cri-o://c8eca91b3d10d751ba9044539a565ccf93bf2b6850494f542dc05613a111b270" gracePeriod=30 Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.119492 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="proxy-httpd" containerID="cri-o://7591ff75fe2fe274b6d74ae2f84daded079492052474677a40990897961cee13" gracePeriod=30 Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.410297 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa877e16-b821-4ef2-8840-806f276e784c","Type":"ContainerStarted","Data":"831d2bf313577e7ed25a1cf019f323910c6f109f230b02e32c1bbc547220eada"} Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.410554 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa877e16-b821-4ef2-8840-806f276e784c","Type":"ContainerStarted","Data":"31b319b00c6f6db48df7b40d0accf5fa76fbfbd041345839181fde620d776d87"} Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.410765 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.411642 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"38030366-fb21-422a-8e22-db3aa78915ea","Type":"ContainerStarted","Data":"31dd69ef54d69434e1c7b0df9c9ef650b1a9e9c38f3dc6e6173cc7056857713e"} Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 
17:24:14.414044 4853 generic.go:334] "Generic (PLEG): container finished" podID="0703aded-0b68-4295-82b2-675c69e88c1f" containerID="7591ff75fe2fe274b6d74ae2f84daded079492052474677a40990897961cee13" exitCode=0 Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.414065 4853 generic.go:334] "Generic (PLEG): container finished" podID="0703aded-0b68-4295-82b2-675c69e88c1f" containerID="5557a8ed4d24736afde9e48642f7b6abd22d43f208550189969f19bc03cab7bf" exitCode=2 Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.414081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerDied","Data":"7591ff75fe2fe274b6d74ae2f84daded079492052474677a40990897961cee13"} Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.414097 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerDied","Data":"5557a8ed4d24736afde9e48642f7b6abd22d43f208550189969f19bc03cab7bf"} Dec 09 17:24:14 crc kubenswrapper[4853]: I1209 17:24:14.438490 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.079868572 podStartE2EDuration="2.438474s" podCreationTimestamp="2025-12-09 17:24:12 +0000 UTC" firstStartedPulling="2025-12-09 17:24:13.491504736 +0000 UTC m=+1680.426243958" lastFinishedPulling="2025-12-09 17:24:13.850110204 +0000 UTC m=+1680.784849386" observedRunningTime="2025-12-09 17:24:14.429241037 +0000 UTC m=+1681.363980289" watchObservedRunningTime="2025-12-09 17:24:14.438474 +0000 UTC m=+1681.373213182" Dec 09 17:24:15 crc kubenswrapper[4853]: I1209 17:24:15.432246 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"38030366-fb21-422a-8e22-db3aa78915ea","Type":"ContainerStarted","Data":"7008c9b21f9b5f859a3716f5f82cfb5deed50c4c0f70abe1d000bb9a0ae28d87"} Dec 09 17:24:15 crc kubenswrapper[4853]: I1209 17:24:15.454714 4853 generic.go:334] "Generic (PLEG): container finished" podID="0703aded-0b68-4295-82b2-675c69e88c1f" containerID="d5c610012a8b7131ab4438259a745df256bb522a879b8f3f43f5a3126e163f67" exitCode=0 Dec 09 17:24:15 crc kubenswrapper[4853]: I1209 17:24:15.454798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerDied","Data":"d5c610012a8b7131ab4438259a745df256bb522a879b8f3f43f5a3126e163f67"} Dec 09 17:24:15 crc kubenswrapper[4853]: I1209 17:24:15.459878 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.851884641 podStartE2EDuration="3.459861523s" podCreationTimestamp="2025-12-09 17:24:12 +0000 UTC" firstStartedPulling="2025-12-09 17:24:13.631798738 +0000 UTC m=+1680.566537920" lastFinishedPulling="2025-12-09 17:24:14.23977561 +0000 UTC m=+1681.174514802" observedRunningTime="2025-12-09 17:24:15.457422848 +0000 UTC m=+1682.392162030" watchObservedRunningTime="2025-12-09 17:24:15.459861523 +0000 UTC m=+1682.394600705" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.469237 4853 generic.go:334] "Generic (PLEG): container finished" podID="0703aded-0b68-4295-82b2-675c69e88c1f" containerID="c8eca91b3d10d751ba9044539a565ccf93bf2b6850494f542dc05613a111b270" exitCode=0 Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.469319 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
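
The pod_startup_latency_tracker entry for kube-state-metrics-0 above is internally consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling), matching the pod-startup SLI's convention of excluding image-pull time. The monotonic m=+ offsets make this checkable to the nanosecond:

# Offsets in seconds, copied from the m=+... fields of the entry above.
first_pull = 1680.426243958   # firstStartedPulling
last_pull  = 1680.784849386   # lastFinishedPulling
e2e        = 2.438474         # podStartE2EDuration
slo        = 2.079868572      # podStartSLOduration

pull = last_pull - first_pull            # 0.358605428 s spent pulling the image
assert abs((e2e - pull) - slo) < 1e-9    # SLO duration = E2E duration - pull window
print(f"pull {pull:.9f}s, e2e {e2e}s, slo {e2e - pull:.9f}s")

The same identity holds for mysqld-exporter-0 here (3.459861523 - 0.607976882 = 2.851884641) and for ceilometer-0 further below.
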
event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerDied","Data":"c8eca91b3d10d751ba9044539a565ccf93bf2b6850494f542dc05613a111b270"} Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.469624 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0703aded-0b68-4295-82b2-675c69e88c1f","Type":"ContainerDied","Data":"f8df74ffa9a15ab6e35c3d33087292cd2230102ca094ce59fd95f87bf60c2a72"} Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.469638 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8df74ffa9a15ab6e35c3d33087292cd2230102ca094ce59fd95f87bf60c2a72" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.506733 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.507281 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.509719 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.524265 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.599958 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.682223 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-run-httpd\") pod \"0703aded-0b68-4295-82b2-675c69e88c1f\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.682326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-log-httpd\") pod \"0703aded-0b68-4295-82b2-675c69e88c1f\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.682405 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-combined-ca-bundle\") pod \"0703aded-0b68-4295-82b2-675c69e88c1f\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.682453 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s27l5\" (UniqueName: \"kubernetes.io/projected/0703aded-0b68-4295-82b2-675c69e88c1f-kube-api-access-s27l5\") pod \"0703aded-0b68-4295-82b2-675c69e88c1f\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.682479 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-sg-core-conf-yaml\") pod \"0703aded-0b68-4295-82b2-675c69e88c1f\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.682541 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-scripts\") pod \"0703aded-0b68-4295-82b2-675c69e88c1f\" (UID: 
\"0703aded-0b68-4295-82b2-675c69e88c1f\") " Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.682704 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-config-data\") pod \"0703aded-0b68-4295-82b2-675c69e88c1f\" (UID: \"0703aded-0b68-4295-82b2-675c69e88c1f\") " Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.683929 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0703aded-0b68-4295-82b2-675c69e88c1f" (UID: "0703aded-0b68-4295-82b2-675c69e88c1f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.684184 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0703aded-0b68-4295-82b2-675c69e88c1f" (UID: "0703aded-0b68-4295-82b2-675c69e88c1f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.691318 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-scripts" (OuterVolumeSpecName: "scripts") pod "0703aded-0b68-4295-82b2-675c69e88c1f" (UID: "0703aded-0b68-4295-82b2-675c69e88c1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.697908 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0703aded-0b68-4295-82b2-675c69e88c1f-kube-api-access-s27l5" (OuterVolumeSpecName: "kube-api-access-s27l5") pod "0703aded-0b68-4295-82b2-675c69e88c1f" (UID: "0703aded-0b68-4295-82b2-675c69e88c1f"). InnerVolumeSpecName "kube-api-access-s27l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.750751 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.754512 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0703aded-0b68-4295-82b2-675c69e88c1f" (UID: "0703aded-0b68-4295-82b2-675c69e88c1f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.762425 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.777390 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.785683 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s27l5\" (UniqueName: \"kubernetes.io/projected/0703aded-0b68-4295-82b2-675c69e88c1f-kube-api-access-s27l5\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.785716 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.785729 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.785741 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.785846 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0703aded-0b68-4295-82b2-675c69e88c1f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.830762 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0703aded-0b68-4295-82b2-675c69e88c1f" (UID: "0703aded-0b68-4295-82b2-675c69e88c1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.839461 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-config-data" (OuterVolumeSpecName: "config-data") pod "0703aded-0b68-4295-82b2-675c69e88c1f" (UID: "0703aded-0b68-4295-82b2-675c69e88c1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.888432 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:16 crc kubenswrapper[4853]: I1209 17:24:16.888466 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0703aded-0b68-4295-82b2-675c69e88c1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.479761 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.480840 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.487540 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.487916 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.616269 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.662329 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.677825 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:24:17 crc kubenswrapper[4853]: E1209 17:24:17.678367 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="proxy-httpd" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.678388 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="proxy-httpd" Dec 09 17:24:17 crc kubenswrapper[4853]: E1209 17:24:17.678417 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="sg-core" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.678424 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="sg-core" Dec 09 17:24:17 crc kubenswrapper[4853]: E1209 17:24:17.680308 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-notification-agent" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.680348 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-notification-agent" Dec 09 17:24:17 crc kubenswrapper[4853]: E1209 17:24:17.680370 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-central-agent" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.680377 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-central-agent" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.680809 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-notification-agent" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.680832 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="sg-core" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.680842 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="proxy-httpd" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.680866 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" containerName="ceilometer-central-agent" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.683529 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.687555 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.687767 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.687798 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.691390 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.836555 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-scripts\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.836929 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-config-data\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.836949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.837509 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd6hf\" (UniqueName: \"kubernetes.io/projected/08b38ce0-38fe-4eda-9d04-79e101fb1be2-kube-api-access-nd6hf\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.837589 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.837698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.838106 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-run-httpd\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.838302 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-log-httpd\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd6hf\" (UniqueName: \"kubernetes.io/projected/08b38ce0-38fe-4eda-9d04-79e101fb1be2-kube-api-access-nd6hf\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940443 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940557 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-run-httpd\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-log-httpd\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940654 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-scripts\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-config-data\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.940687 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.941283 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-log-httpd\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.942870 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-run-httpd\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.946939 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-scripts\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.947105 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.947364 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.947800 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.953995 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-config-data\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:17 crc kubenswrapper[4853]: I1209 17:24:17.959687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd6hf\" (UniqueName: \"kubernetes.io/projected/08b38ce0-38fe-4eda-9d04-79e101fb1be2-kube-api-access-nd6hf\") pod \"ceilometer-0\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " pod="openstack/ceilometer-0" Dec 09 17:24:18 crc kubenswrapper[4853]: I1209 17:24:18.020434 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:24:18 crc kubenswrapper[4853]: I1209 17:24:18.535502 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:24:18 crc kubenswrapper[4853]: W1209 17:24:18.541484 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08b38ce0_38fe_4eda_9d04_79e101fb1be2.slice/crio-529bb9b1f008761fc20c0d28d27434f37df90026ea32a328e8a8d116b1e5dcb7 WatchSource:0}: Error finding container 529bb9b1f008761fc20c0d28d27434f37df90026ea32a328e8a8d116b1e5dcb7: Status 404 returned error can't find the container with id 529bb9b1f008761fc20c0d28d27434f37df90026ea32a328e8a8d116b1e5dcb7 Dec 09 17:24:19 crc kubenswrapper[4853]: I1209 17:24:19.503347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerStarted","Data":"529bb9b1f008761fc20c0d28d27434f37df90026ea32a328e8a8d116b1e5dcb7"} Dec 09 17:24:19 crc kubenswrapper[4853]: I1209 17:24:19.581411 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0703aded-0b68-4295-82b2-675c69e88c1f" path="/var/lib/kubelet/pods/0703aded-0b68-4295-82b2-675c69e88c1f/volumes" Dec 09 17:24:20 crc kubenswrapper[4853]: I1209 17:24:20.545951 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerStarted","Data":"1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd"} Dec 09 17:24:21 crc kubenswrapper[4853]: I1209 17:24:21.594200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerStarted","Data":"e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e"} Dec 09 17:24:22 crc kubenswrapper[4853]: I1209 17:24:22.567399 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:24:22 crc kubenswrapper[4853]: E1209 17:24:22.567972 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:24:22 crc kubenswrapper[4853]: I1209 17:24:22.587467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerStarted","Data":"a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e"} Dec 09 17:24:22 crc kubenswrapper[4853]: I1209 17:24:22.966244 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 17:24:24 crc kubenswrapper[4853]: I1209 17:24:24.613694 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerStarted","Data":"c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5"} Dec 09 17:24:24 crc kubenswrapper[4853]: I1209 17:24:24.614306 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:24:24 crc kubenswrapper[4853]: I1209 
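
The W1209 manager.go "Failed to process watch event ... Status 404" above looks like the familiar cAdvisor watch race on container start: the watcher sees the new crio-529bb9b1... cgroup before the runtime can answer for that ID, and the very same ID shows up moments later in a ContainerStarted PLEG event, so the warning is transient here. Cross-checking makes that mechanical; a sketch:

import re, sys

WATCH = re.compile(r'Error finding container (?P<cid>[0-9a-f]{64})')
STARTED = re.compile(r'"Type":"ContainerStarted","Data":"(?P<cid>[0-9a-f]{64})"')

warned, started = set(), set()
for line in sys.stdin:
    warned.update(m['cid'] for m in WATCH.finditer(line))
    started.update(m['cid'] for m in STARTED.finditer(line))

for cid in sorted(warned):
    print(cid[:12], "later started" if cid in started else "never started")
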
17:24:24.646546 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.380876102 podStartE2EDuration="7.646526663s" podCreationTimestamp="2025-12-09 17:24:17 +0000 UTC" firstStartedPulling="2025-12-09 17:24:18.545981518 +0000 UTC m=+1685.480720710" lastFinishedPulling="2025-12-09 17:24:23.811632079 +0000 UTC m=+1690.746371271" observedRunningTime="2025-12-09 17:24:24.634938808 +0000 UTC m=+1691.569677990" watchObservedRunningTime="2025-12-09 17:24:24.646526663 +0000 UTC m=+1691.581265845" Dec 09 17:24:34 crc kubenswrapper[4853]: I1209 17:24:34.567753 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:24:34 crc kubenswrapper[4853]: E1209 17:24:34.568554 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:24:45 crc kubenswrapper[4853]: I1209 17:24:45.567820 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:24:45 crc kubenswrapper[4853]: E1209 17:24:45.568801 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:24:48 crc kubenswrapper[4853]: I1209 17:24:48.063941 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 17:24:56 crc kubenswrapper[4853]: I1209 17:24:56.567676 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:24:56 crc kubenswrapper[4853]: E1209 17:24:56.568623 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.640233 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-k5v9z"] Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.653580 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-k5v9z"] Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.756869 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-52zlg"] Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.759569 4853 util.go:30] "No sandbox for pod can be found. 
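
The machine-config-daemon pair of entries repeats about every 10-12 seconds through this stretch (17:24:10, :22, :34, :45, :56): "RemoveContainer" for the dead container's ID, then "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s ...". The pod has reached kubelet's restart back-off ceiling, so each sync re-queues it but declines to start the container until the window expires. As a sketch of that schedule, assuming kubelet's default back-off (10s base, doubling per restart, 5m cap; only the 5m cap is actually printed by these entries):

# Kubelet-style container restart back-off (assumed defaults: 10s base, 5m cap).
BASE, CAP = 10, 300  # seconds

def backoff(restarts: int) -> int:
    """Delay, in seconds, applied before restart attempt number `restarts` (0-based)."""
    return min(BASE * 2 ** restarts, CAP)

print([backoff(n) for n in range(8)])
# -> [10, 20, 40, 80, 160, 300, 300, 300]; from the 6th attempt on, every retry waits the full 5m.
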
Need to start a new one" pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.785356 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-52zlg"] Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.846146 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-combined-ca-bundle\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.846539 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkwg5\" (UniqueName: \"kubernetes.io/projected/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-kube-api-access-nkwg5\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.846699 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-config-data\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.949063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkwg5\" (UniqueName: \"kubernetes.io/projected/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-kube-api-access-nkwg5\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.949158 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-config-data\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.949238 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-combined-ca-bundle\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.962388 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-combined-ca-bundle\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.962555 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-config-data\") pod \"heat-db-sync-52zlg\" (UID: \"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:00 crc kubenswrapper[4853]: I1209 17:25:00.966639 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkwg5\" (UniqueName: \"kubernetes.io/projected/3819bec9-a99d-4c1a-a387-3f0dff9f4b1d-kube-api-access-nkwg5\") pod \"heat-db-sync-52zlg\" (UID: 
\"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d\") " pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:01 crc kubenswrapper[4853]: I1209 17:25:01.090488 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-52zlg" Dec 09 17:25:01 crc kubenswrapper[4853]: I1209 17:25:01.579746 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20655566-5ed0-4732-835a-0bd04a51988f" path="/var/lib/kubelet/pods/20655566-5ed0-4732-835a-0bd04a51988f/volumes" Dec 09 17:25:01 crc kubenswrapper[4853]: I1209 17:25:01.618351 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:25:01 crc kubenswrapper[4853]: I1209 17:25:01.627402 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-52zlg"] Dec 09 17:25:01 crc kubenswrapper[4853]: E1209 17:25:01.725938 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:25:01 crc kubenswrapper[4853]: E1209 17:25:01.726003 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:25:01 crc kubenswrapper[4853]: E1209 17:25:01.726189 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:25:01 crc kubenswrapper[4853]: E1209 17:25:01.727808 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
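This pull failure is not transient: the registry itself reports that the current-tested tag "was deleted or has expired", so the ImagePullBackOff retries that follow cannot succeed until the tag is restored or the pod spec points at a live tag or digest. The tag's state can be confirmed from any host with skopeo, e.g.:

    $ skopeo inspect docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested
    $ skopeo list-tags docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine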
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:25:02 crc kubenswrapper[4853]: I1209 17:25:02.064249 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-52zlg" event={"ID":"3819bec9-a99d-4c1a-a387-3f0dff9f4b1d","Type":"ContainerStarted","Data":"cdf0855ede8ef1e9f5aa97ec5a36d11436ba2bd651a811c0e36b79767bd0933f"} Dec 09 17:25:02 crc kubenswrapper[4853]: E1209 17:25:02.065821 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:25:02 crc kubenswrapper[4853]: I1209 17:25:02.577227 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:25:02 crc kubenswrapper[4853]: I1209 17:25:02.578053 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-central-agent" containerID="cri-o://1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd" gracePeriod=30 Dec 09 17:25:02 crc kubenswrapper[4853]: I1209 17:25:02.578217 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="proxy-httpd" containerID="cri-o://c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5" gracePeriod=30 Dec 09 17:25:02 crc kubenswrapper[4853]: I1209 17:25:02.578275 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="sg-core" containerID="cri-o://a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e" gracePeriod=30 Dec 09 17:25:02 crc kubenswrapper[4853]: I1209 17:25:02.578318 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-notification-agent" containerID="cri-o://e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e" gracePeriod=30 Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.081265 4853 generic.go:334] "Generic (PLEG): container finished" podID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerID="c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5" exitCode=0 Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.081300 4853 generic.go:334] "Generic (PLEG): container finished" podID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerID="a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e" exitCode=2 Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.081309 4853 generic.go:334] "Generic (PLEG): container finished" podID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerID="1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd" exitCode=0 Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.081315 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerDied","Data":"c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5"} Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.081368 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerDied","Data":"a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e"} Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.081407 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerDied","Data":"1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd"} Dec 09 17:25:03 crc kubenswrapper[4853]: E1209 17:25:03.084794 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.719427 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.756277 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.816746 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938093 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-scripts\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938267 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-combined-ca-bundle\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938298 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-log-httpd\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938380 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-sg-core-conf-yaml\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938421 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd6hf\" (UniqueName: \"kubernetes.io/projected/08b38ce0-38fe-4eda-9d04-79e101fb1be2-kube-api-access-nd6hf\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938515 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-ceilometer-tls-certs\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938581 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-config-data\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.938681 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-run-httpd\") pod \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\" (UID: \"08b38ce0-38fe-4eda-9d04-79e101fb1be2\") " Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.944849 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.946381 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.949749 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-scripts" (OuterVolumeSpecName: "scripts") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.959779 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b38ce0-38fe-4eda-9d04-79e101fb1be2-kube-api-access-nd6hf" (OuterVolumeSpecName: "kube-api-access-nd6hf") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "kube-api-access-nd6hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:03 crc kubenswrapper[4853]: I1209 17:25:03.995315 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.023700 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.041411 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.041443 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.041452 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.041462 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08b38ce0-38fe-4eda-9d04-79e101fb1be2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.041471 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.041480 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd6hf\" (UniqueName: \"kubernetes.io/projected/08b38ce0-38fe-4eda-9d04-79e101fb1be2-kube-api-access-nd6hf\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.056826 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.107910 4853 generic.go:334] "Generic (PLEG): container finished" podID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerID="e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e" exitCode=0 Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.108268 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.108868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerDied","Data":"e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e"} Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.109289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08b38ce0-38fe-4eda-9d04-79e101fb1be2","Type":"ContainerDied","Data":"529bb9b1f008761fc20c0d28d27434f37df90026ea32a328e8a8d116b1e5dcb7"} Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.109307 4853 scope.go:117] "RemoveContainer" containerID="c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.146579 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.149930 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-config-data" (OuterVolumeSpecName: "config-data") pod "08b38ce0-38fe-4eda-9d04-79e101fb1be2" (UID: "08b38ce0-38fe-4eda-9d04-79e101fb1be2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.199551 4853 scope.go:117] "RemoveContainer" containerID="a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.232801 4853 scope.go:117] "RemoveContainer" containerID="e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.249378 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b38ce0-38fe-4eda-9d04-79e101fb1be2-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.260711 4853 scope.go:117] "RemoveContainer" containerID="1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.281371 4853 scope.go:117] "RemoveContainer" containerID="c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5" Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.283178 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5\": container with ID starting with c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5 not found: ID does not exist" containerID="c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.283356 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5"} err="failed to get container status \"c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5\": rpc error: code = NotFound desc = could not find container \"c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5\": container with ID starting with c3fcac0c2bc070c88eff1f39a005ca2a20fa5112cd01f1583f11fa0361ca84a5 not found: ID does not 
exist" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.283487 4853 scope.go:117] "RemoveContainer" containerID="a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e" Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.284143 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e\": container with ID starting with a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e not found: ID does not exist" containerID="a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.284183 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e"} err="failed to get container status \"a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e\": rpc error: code = NotFound desc = could not find container \"a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e\": container with ID starting with a4bdaf768a254354cb1757cb0a0d20a0987eb919ad9d3b894389077a6f32a14e not found: ID does not exist" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.284214 4853 scope.go:117] "RemoveContainer" containerID="e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e" Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.284475 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e\": container with ID starting with e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e not found: ID does not exist" containerID="e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.284496 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e"} err="failed to get container status \"e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e\": rpc error: code = NotFound desc = could not find container \"e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e\": container with ID starting with e233661eec736e0c3a8023247d3a04a2be2f15072e5d641a414bb619214f151e not found: ID does not exist" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.284509 4853 scope.go:117] "RemoveContainer" containerID="1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd" Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.284805 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd\": container with ID starting with 1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd not found: ID does not exist" containerID="1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.284826 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd"} err="failed to get container status \"1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd\": rpc error: code = NotFound desc = could not find container 
\"1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd\": container with ID starting with 1fcf968e0d823ee39c0b098a80c1cdf89ba2c6ac399620f8c2a36a97040ddcbd not found: ID does not exist" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.450581 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.465379 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.496492 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.497321 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-notification-agent" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497353 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-notification-agent" Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.497390 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="sg-core" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497399 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="sg-core" Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.497429 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-central-agent" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497438 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-central-agent" Dec 09 17:25:04 crc kubenswrapper[4853]: E1209 17:25:04.497460 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="proxy-httpd" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497467 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="proxy-httpd" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497863 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-notification-agent" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497908 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="proxy-httpd" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497932 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="ceilometer-central-agent" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.497951 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" containerName="sg-core" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.500994 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.506186 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.506441 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.506658 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.510122 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558g5\" (UniqueName: \"kubernetes.io/projected/6e815965-15fe-4f84-8eb4-133f91163a08-kube-api-access-558g5\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555400 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555446 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-config-data\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555463 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-scripts\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555556 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e815965-15fe-4f84-8eb4-133f91163a08-log-httpd\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555590 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555630 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.555665 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6e815965-15fe-4f84-8eb4-133f91163a08-run-httpd\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.658799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e815965-15fe-4f84-8eb4-133f91163a08-log-httpd\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.658907 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.658977 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.659057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e815965-15fe-4f84-8eb4-133f91163a08-run-httpd\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.659187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558g5\" (UniqueName: \"kubernetes.io/projected/6e815965-15fe-4f84-8eb4-133f91163a08-kube-api-access-558g5\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.659381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.659514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-config-data\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.659565 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-scripts\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.661923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e815965-15fe-4f84-8eb4-133f91163a08-run-httpd\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.664939 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6e815965-15fe-4f84-8eb4-133f91163a08-log-httpd\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.667006 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-scripts\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.669522 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-config-data\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.672421 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.672819 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.678272 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e815965-15fe-4f84-8eb4-133f91163a08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.681528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558g5\" (UniqueName: \"kubernetes.io/projected/6e815965-15fe-4f84-8eb4-133f91163a08-kube-api-access-558g5\") pod \"ceilometer-0\" (UID: \"6e815965-15fe-4f84-8eb4-133f91163a08\") " pod="openstack/ceilometer-0" Dec 09 17:25:04 crc kubenswrapper[4853]: I1209 17:25:04.823686 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 17:25:05 crc kubenswrapper[4853]: I1209 17:25:05.505209 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 17:25:05 crc kubenswrapper[4853]: W1209 17:25:05.508552 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e815965_15fe_4f84_8eb4_133f91163a08.slice/crio-3628dd466d28ec7358f69982164d2ba6f2cda2cafd0caddcd52dc626b6309e41 WatchSource:0}: Error finding container 3628dd466d28ec7358f69982164d2ba6f2cda2cafd0caddcd52dc626b6309e41: Status 404 returned error can't find the container with id 3628dd466d28ec7358f69982164d2ba6f2cda2cafd0caddcd52dc626b6309e41 Dec 09 17:25:05 crc kubenswrapper[4853]: I1209 17:25:05.584223 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b38ce0-38fe-4eda-9d04-79e101fb1be2" path="/var/lib/kubelet/pods/08b38ce0-38fe-4eda-9d04-79e101fb1be2/volumes" Dec 09 17:25:05 crc kubenswrapper[4853]: E1209 17:25:05.636131 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:25:05 crc kubenswrapper[4853]: E1209 17:25:05.636256 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:25:05 crc kubenswrapper[4853]: E1209 17:25:05.636494 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 17:25:06 crc kubenswrapper[4853]: I1209 17:25:06.148004 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e815965-15fe-4f84-8eb4-133f91163a08","Type":"ContainerStarted","Data":"3628dd466d28ec7358f69982164d2ba6f2cda2cafd0caddcd52dc626b6309e41"} Dec 09 17:25:07 crc kubenswrapper[4853]: I1209 17:25:07.173558 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e815965-15fe-4f84-8eb4-133f91163a08","Type":"ContainerStarted","Data":"0f1ea119bd6e7729ae15bda9025acdcbe497fec6c1f686482ff772abcb1c4653"} Dec 09 17:25:08 crc kubenswrapper[4853]: I1209 17:25:08.191143 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e815965-15fe-4f84-8eb4-133f91163a08","Type":"ContainerStarted","Data":"40c07fb05fdaec4c9d3ba04cfed7da82a5c9547ed82a4718d4711825f9c66f83"} Dec 09 17:25:08 crc kubenswrapper[4853]: I1209 17:25:08.530621 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="03a2cb4e-7efc-4040-a115-db55575800e5" containerName="rabbitmq" containerID="cri-o://8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b" gracePeriod=604796 Dec 09 17:25:08 crc kubenswrapper[4853]: I1209 17:25:08.567544 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:25:08 crc kubenswrapper[4853]: E1209 17:25:08.567973 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:25:08 crc kubenswrapper[4853]: I1209 17:25:08.862365 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerName="rabbitmq" containerID="cri-o://ac0fd5259f9efa3d8d6a09fb258b3fcb49f0c6f25ce4ec2dbb972cd65109ec37" gracePeriod=604796 Dec 09 17:25:09 crc kubenswrapper[4853]: E1209 17:25:09.079590 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:25:09 crc kubenswrapper[4853]: I1209 17:25:09.204319 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e815965-15fe-4f84-8eb4-133f91163a08","Type":"ContainerStarted","Data":"dc50816b85ff752e4a1fa31d71906376101186c596f242ca6212079fa3999bd6"} Dec 09 17:25:09 crc kubenswrapper[4853]: I1209 17:25:09.204523 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 17:25:09 crc kubenswrapper[4853]: E1209 17:25:09.206004 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:25:10 crc kubenswrapper[4853]: E1209 17:25:10.243548 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:25:11 crc kubenswrapper[4853]: I1209 17:25:11.505973 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Dec 09 17:25:11 crc kubenswrapper[4853]: I1209 17:25:11.822572 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="03a2cb4e-7efc-4040-a115-db55575800e5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Dec 09 17:25:14 crc kubenswrapper[4853]: E1209 17:25:14.706654 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:25:14 crc kubenswrapper[4853]: E1209 17:25:14.712669 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:25:14 crc kubenswrapper[4853]: E1209 17:25:14.712929 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:25:14 crc kubenswrapper[4853]: E1209 17:25:14.715084 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.179206 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.304712 4853 generic.go:334] "Generic (PLEG): container finished" podID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerID="ac0fd5259f9efa3d8d6a09fb258b3fcb49f0c6f25ce4ec2dbb972cd65109ec37" exitCode=0 Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.304798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96900f2e-a2ad-47fe-be9b-7b6a924ded82","Type":"ContainerDied","Data":"ac0fd5259f9efa3d8d6a09fb258b3fcb49f0c6f25ce4ec2dbb972cd65109ec37"} Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.307032 4853 generic.go:334] "Generic (PLEG): container finished" podID="03a2cb4e-7efc-4040-a115-db55575800e5" containerID="8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b" exitCode=0 Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.307062 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03a2cb4e-7efc-4040-a115-db55575800e5","Type":"ContainerDied","Data":"8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b"} Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.307082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"03a2cb4e-7efc-4040-a115-db55575800e5","Type":"ContainerDied","Data":"86c345071e94f19007dd74127d42f586e8093f4d1d6570dad7c1ccbd053b0124"} Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.307099 4853 scope.go:117] "RemoveContainer" containerID="8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.307264 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.338781 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-config-data\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.338841 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-server-conf\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.338903 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03a2cb4e-7efc-4040-a115-db55575800e5-pod-info\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.338994 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-plugins-conf\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.339079 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-erlang-cookie\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.339170 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-confd\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.339196 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8qx2\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-kube-api-access-h8qx2\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.339243 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-tls\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.339349 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.339377 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03a2cb4e-7efc-4040-a115-db55575800e5-erlang-cookie-secret\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: 
\"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.339422 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-plugins\") pod \"03a2cb4e-7efc-4040-a115-db55575800e5\" (UID: \"03a2cb4e-7efc-4040-a115-db55575800e5\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.341052 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.343635 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.343695 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.346722 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a2cb4e-7efc-4040-a115-db55575800e5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.347186 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.353825 4853 scope.go:117] "RemoveContainer" containerID="21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.354391 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-kube-api-access-h8qx2" (OuterVolumeSpecName: "kube-api-access-h8qx2") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "kube-api-access-h8qx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.358761 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.377793 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/03a2cb4e-7efc-4040-a115-db55575800e5-pod-info" (OuterVolumeSpecName: "pod-info") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.417347 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-config-data" (OuterVolumeSpecName: "config-data") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.443974 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444014 4853 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03a2cb4e-7efc-4040-a115-db55575800e5-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444028 4853 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444041 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444060 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8qx2\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-kube-api-access-h8qx2\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444069 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444101 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444113 4853 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03a2cb4e-7efc-4040-a115-db55575800e5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.444124 4853 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.505813 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.512696 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.531193 4853 scope.go:117] "RemoveContainer" containerID="8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b" Dec 09 17:25:15 crc kubenswrapper[4853]: E1209 17:25:15.538098 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b\": container with ID starting with 8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b not found: ID does not exist" containerID="8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.538145 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b"} err="failed to get container status \"8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b\": rpc error: code = NotFound desc = could not find container \"8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b\": container with ID starting with 8287ca71d93c9a9d7171260fd42c769ba9f077287fae5fae1f8b559ab9362f6b not found: ID does not exist" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.538173 4853 scope.go:117] "RemoveContainer" containerID="21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.541567 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-server-conf" (OuterVolumeSpecName: "server-conf") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: E1209 17:25:15.586249 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4\": container with ID starting with 21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4 not found: ID does not exist" containerID="21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.586301 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4"} err="failed to get container status \"21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4\": rpc error: code = NotFound desc = could not find container \"21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4\": container with ID starting with 21d55c6fccecec1d19a8e37dd393e4f5e54c0615e1f939e8902578a8109368f4 not found: ID does not exist" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594330 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-plugins\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594397 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96900f2e-a2ad-47fe-be9b-7b6a924ded82-pod-info\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594477 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-erlang-cookie\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594555 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96900f2e-a2ad-47fe-be9b-7b6a924ded82-erlang-cookie-secret\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594581 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594661 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-plugins-conf\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594715 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-confd\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: 
\"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594735 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-tls\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594760 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-config-data\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594863 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwwtk\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-kube-api-access-fwwtk\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.594885 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-server-conf\") pod \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\" (UID: \"96900f2e-a2ad-47fe-be9b-7b6a924ded82\") " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.595442 4853 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03a2cb4e-7efc-4040-a115-db55575800e5-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.595458 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.599288 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.601160 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.601929 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.605806 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.613902 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96900f2e-a2ad-47fe-be9b-7b6a924ded82-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.614043 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/96900f2e-a2ad-47fe-be9b-7b6a924ded82-pod-info" (OuterVolumeSpecName: "pod-info") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.616538 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.617979 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-kube-api-access-fwwtk" (OuterVolumeSpecName: "kube-api-access-fwwtk") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "kube-api-access-fwwtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.624954 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "03a2cb4e-7efc-4040-a115-db55575800e5" (UID: "03a2cb4e-7efc-4040-a115-db55575800e5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.698927 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03a2cb4e-7efc-4040-a115-db55575800e5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.698962 4853 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.698974 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.698984 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwwtk\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-kube-api-access-fwwtk\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.698997 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.699008 4853 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96900f2e-a2ad-47fe-be9b-7b6a924ded82-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.699022 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.699034 4853 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96900f2e-a2ad-47fe-be9b-7b6a924ded82-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.699057 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.711815 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-config-data" (OuterVolumeSpecName: "config-data") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.726582 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-server-conf" (OuterVolumeSpecName: "server-conf") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.761825 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.801522 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.801565 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.801577 4853 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96900f2e-a2ad-47fe-be9b-7b6a924ded82-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.817762 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "96900f2e-a2ad-47fe-be9b-7b6a924ded82" (UID: "96900f2e-a2ad-47fe-be9b-7b6a924ded82"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.904222 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96900f2e-a2ad-47fe-be9b-7b6a924ded82-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.948540 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.983545 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.996781 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:25:15 crc kubenswrapper[4853]: E1209 17:25:15.997515 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerName="rabbitmq" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.997543 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerName="rabbitmq" Dec 09 17:25:15 crc kubenswrapper[4853]: E1209 17:25:15.997575 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a2cb4e-7efc-4040-a115-db55575800e5" containerName="setup-container" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.997585 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a2cb4e-7efc-4040-a115-db55575800e5" containerName="setup-container" Dec 09 17:25:15 crc kubenswrapper[4853]: E1209 17:25:15.997642 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerName="setup-container" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.997652 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerName="setup-container" Dec 09 17:25:15 crc kubenswrapper[4853]: E1209 17:25:15.997667 4853 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="03a2cb4e-7efc-4040-a115-db55575800e5" containerName="rabbitmq" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.997675 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a2cb4e-7efc-4040-a115-db55575800e5" containerName="rabbitmq" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.997991 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" containerName="rabbitmq" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.998026 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a2cb4e-7efc-4040-a115-db55575800e5" containerName="rabbitmq" Dec 09 17:25:15 crc kubenswrapper[4853]: I1209 17:25:15.999681 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.007508 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.009574 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.009805 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.010342 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.010546 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.010682 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.010913 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.010992 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-h9wq5" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.110353 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe91677e-e106-4624-a45e-45111c868559-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.110438 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462kl\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-kube-api-access-462kl\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.110468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.110502 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.110757 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.110832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.111082 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.111108 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.111199 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.111234 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe91677e-e106-4624-a45e-45111c868559-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.111265 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.213475 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe91677e-e106-4624-a45e-45111c868559-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.213585 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-462kl\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-kube-api-access-462kl\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.213698 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.213766 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.213895 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.213967 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.214099 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.214131 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.214203 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.214242 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe91677e-e106-4624-a45e-45111c868559-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.214274 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.214851 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.215100 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.215351 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.215420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.215843 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.215877 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fe91677e-e106-4624-a45e-45111c868559-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.218014 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.218558 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fe91677e-e106-4624-a45e-45111c868559-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.219581 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fe91677e-e106-4624-a45e-45111c868559-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.230450 
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.233254 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462kl\" (UniqueName: \"kubernetes.io/projected/fe91677e-e106-4624-a45e-45111c868559-kube-api-access-462kl\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.260766 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fe91677e-e106-4624-a45e-45111c868559\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.319558 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96900f2e-a2ad-47fe-be9b-7b6a924ded82","Type":"ContainerDied","Data":"95f63897c683b3a56d95f3c1a0a1d25e48f29708bfffe7b6e3e27750a4b23f65"}
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.319609 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.319653 4853 scope.go:117] "RemoveContainer" containerID="ac0fd5259f9efa3d8d6a09fb258b3fcb49f0c6f25ce4ec2dbb972cd65109ec37"
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.320487 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.385746 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.386849 4853 scope.go:117] "RemoveContainer" containerID="9d3feb5a12e69400f9270b312552b50129b059d8c865fffbce24185943e545b6"
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.424221 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.453829 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.456644 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.459413 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.459448 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.459481 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.459506 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.459725 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5m645" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.459756 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.463524 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.470322 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.622411 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.622546 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.622573 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bktm7\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-kube-api-access-bktm7\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.622763 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.622899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.623047 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.623222 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.623336 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.623400 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.623512 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.623542 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725722 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725762 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725834 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725887 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725954 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.725979 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bktm7\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-kube-api-access-bktm7\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.726004 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.726048 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.726105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.726507 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.726903 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.726950 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.727837 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.728197 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.728265 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.733459 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.733714 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.733707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.735049 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.744174 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bktm7\" (UniqueName: \"kubernetes.io/projected/2ce495e5-4db9-457d-a5c9-eb39308cbcd2-kube-api-access-bktm7\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.768188 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce495e5-4db9-457d-a5c9-eb39308cbcd2\") " pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.787769 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 17:25:16 crc kubenswrapper[4853]: W1209 17:25:16.861869 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe91677e_e106_4624_a45e_45111c868559.slice/crio-7ed73df81572a44e8c4fded722f25ad94bddf5902a91f1d2eb84924e342dc1ce WatchSource:0}: Error finding container 7ed73df81572a44e8c4fded722f25ad94bddf5902a91f1d2eb84924e342dc1ce: Status 404 returned error can't find the container with id 7ed73df81572a44e8c4fded722f25ad94bddf5902a91f1d2eb84924e342dc1ce Dec 09 17:25:16 crc kubenswrapper[4853]: I1209 17:25:16.862801 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 17:25:17 crc kubenswrapper[4853]: I1209 17:25:17.269989 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 17:25:17 crc kubenswrapper[4853]: W1209 17:25:17.270186 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce495e5_4db9_457d_a5c9_eb39308cbcd2.slice/crio-d63febbbe733cde023c5599f5a998a7e9fe8fd7347508ca90516c09b5502226b WatchSource:0}: Error finding container d63febbbe733cde023c5599f5a998a7e9fe8fd7347508ca90516c09b5502226b: Status 404 returned error can't find the container with id d63febbbe733cde023c5599f5a998a7e9fe8fd7347508ca90516c09b5502226b Dec 09 17:25:17 crc kubenswrapper[4853]: I1209 17:25:17.347850 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce495e5-4db9-457d-a5c9-eb39308cbcd2","Type":"ContainerStarted","Data":"d63febbbe733cde023c5599f5a998a7e9fe8fd7347508ca90516c09b5502226b"} Dec 09 17:25:17 crc kubenswrapper[4853]: I1209 17:25:17.350768 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe91677e-e106-4624-a45e-45111c868559","Type":"ContainerStarted","Data":"7ed73df81572a44e8c4fded722f25ad94bddf5902a91f1d2eb84924e342dc1ce"} Dec 09 17:25:17 crc kubenswrapper[4853]: I1209 17:25:17.581832 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a2cb4e-7efc-4040-a115-db55575800e5" path="/var/lib/kubelet/pods/03a2cb4e-7efc-4040-a115-db55575800e5/volumes" Dec 09 17:25:17 crc kubenswrapper[4853]: I1209 17:25:17.583067 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96900f2e-a2ad-47fe-be9b-7b6a924ded82" path="/var/lib/kubelet/pods/96900f2e-a2ad-47fe-be9b-7b6a924ded82/volumes" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.072007 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gsjfm"] Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.075761 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.094455 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gsjfm"] Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.097797 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.162105 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.162172 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqg4k\" (UniqueName: \"kubernetes.io/projected/adfce42f-8efb-4ccb-b49f-81624a6963de-kube-api-access-hqg4k\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.162246 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.162346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-config\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.162428 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.162452 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.162477 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.264719 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-config\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: 
\"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.264858 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.264890 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.264927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.264945 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.264977 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqg4k\" (UniqueName: \"kubernetes.io/projected/adfce42f-8efb-4ccb-b49f-81624a6963de-kube-api-access-hqg4k\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.265045 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.265691 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-config\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.265890 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.266257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.266412 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.266886 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.267075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.386637 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqg4k\" (UniqueName: \"kubernetes.io/projected/adfce42f-8efb-4ccb-b49f-81624a6963de-kube-api-access-hqg4k\") pod \"dnsmasq-dns-7d84b4d45c-gsjfm\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:18 crc kubenswrapper[4853]: I1209 17:25:18.429620 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:19 crc kubenswrapper[4853]: I1209 17:25:19.025262 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gsjfm"] Dec 09 17:25:19 crc kubenswrapper[4853]: I1209 17:25:19.460937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" event={"ID":"adfce42f-8efb-4ccb-b49f-81624a6963de","Type":"ContainerStarted","Data":"4c2a561035c1fa14441757b8c197280210fdb2a14a38703e2b166f302c533a93"} Dec 09 17:25:19 crc kubenswrapper[4853]: I1209 17:25:19.465148 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe91677e-e106-4624-a45e-45111c868559","Type":"ContainerStarted","Data":"924100489e5218580a84fdc95fb9e5b48bbe1c99e9977834a0be2cd442f6199b"} Dec 09 17:25:20 crc kubenswrapper[4853]: I1209 17:25:20.487052 4853 generic.go:334] "Generic (PLEG): container finished" podID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerID="de8a7d229a4639b95a5f0000d28aaa82f9207711b8e54cacb92a73d4815e752b" exitCode=0 Dec 09 17:25:20 crc kubenswrapper[4853]: I1209 17:25:20.487140 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" event={"ID":"adfce42f-8efb-4ccb-b49f-81624a6963de","Type":"ContainerDied","Data":"de8a7d229a4639b95a5f0000d28aaa82f9207711b8e54cacb92a73d4815e752b"} Dec 09 17:25:20 crc kubenswrapper[4853]: I1209 17:25:20.491070 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce495e5-4db9-457d-a5c9-eb39308cbcd2","Type":"ContainerStarted","Data":"be7a40059b47e642453463f2d02c25e3a0e2dae70dee72890508c9e9d7edf8a7"} Dec 09 17:25:21 crc kubenswrapper[4853]: I1209 17:25:21.503162 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" event={"ID":"adfce42f-8efb-4ccb-b49f-81624a6963de","Type":"ContainerStarted","Data":"12ee27b5cf3e933f7b7d978df64ec9968b0b64432080cebbacd81640db1b0dca"} Dec 09 17:25:21 crc kubenswrapper[4853]: I1209 17:25:21.532104 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" podStartSLOduration=4.532080795 podStartE2EDuration="4.532080795s" podCreationTimestamp="2025-12-09 17:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:25:21.531689955 +0000 UTC m=+1748.466429157" watchObservedRunningTime="2025-12-09 17:25:21.532080795 +0000 UTC m=+1748.466819987" Dec 09 17:25:21 crc kubenswrapper[4853]: I1209 17:25:21.568163 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:25:21 crc kubenswrapper[4853]: E1209 17:25:21.568569 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:25:22 crc kubenswrapper[4853]: I1209 17:25:22.518833 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:25 crc kubenswrapper[4853]: I1209 17:25:25.016195 4853 scope.go:117] "RemoveContainer" containerID="0ec2681324ff689ab5bb7881b27a68c39ed363f8ab572cf89f63a9821f6c036a" Dec 09 17:25:25 crc kubenswrapper[4853]: I1209 17:25:25.047826 4853 scope.go:117] "RemoveContainer" containerID="3e3ca36931c6df587306622402f9b2f757ee55ac3f63e55574d067bcd542d863" Dec 09 17:25:25 crc kubenswrapper[4853]: I1209 17:25:25.621734 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 17:25:25 crc kubenswrapper[4853]: E1209 17:25:25.661721 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:25:25 crc kubenswrapper[4853]: E1209 17:25:25.661779 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:25:25 crc kubenswrapper[4853]: E1209 17:25:25.661930 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:25:25 crc kubenswrapper[4853]: E1209 17:25:25.663130 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:25:26 crc kubenswrapper[4853]: E1209 17:25:26.586408 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.431242 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.535900 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-47wqg"] Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.536149 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" podUID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerName="dnsmasq-dns" containerID="cri-o://6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa" gracePeriod=10 Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.708503 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-q2qlg"] Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.716919 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.758631 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-q2qlg"] Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.863517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpbg\" (UniqueName: \"kubernetes.io/projected/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-kube-api-access-9gpbg\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.863645 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.863680 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-config\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.863698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.863739 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.863776 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.863805 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.981163 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.981433 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-config\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.981458 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.982088 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.982531 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-config\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.982629 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.983263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-openstack-edpm-ipam\") 
pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.984541 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.984706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.984772 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.984951 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gpbg\" (UniqueName: \"kubernetes.io/projected/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-kube-api-access-9gpbg\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.985940 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:28 crc kubenswrapper[4853]: I1209 17:25:28.986676 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.008421 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gpbg\" (UniqueName: \"kubernetes.io/projected/a2d602a5-68a3-4b5a-825b-3313e3e85c0e-kube-api-access-9gpbg\") pod \"dnsmasq-dns-6f6df4f56c-q2qlg\" (UID: \"a2d602a5-68a3-4b5a-825b-3313e3e85c0e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.055491 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.281077 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.395411 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-swift-storage-0\") pod \"9cea25af-f35d-42ec-accb-ef519f796dc8\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.396357 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-nb\") pod \"9cea25af-f35d-42ec-accb-ef519f796dc8\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.396389 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhwfs\" (UniqueName: \"kubernetes.io/projected/9cea25af-f35d-42ec-accb-ef519f796dc8-kube-api-access-jhwfs\") pod \"9cea25af-f35d-42ec-accb-ef519f796dc8\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.397000 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-svc\") pod \"9cea25af-f35d-42ec-accb-ef519f796dc8\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.397055 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-config\") pod \"9cea25af-f35d-42ec-accb-ef519f796dc8\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.397224 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-sb\") pod \"9cea25af-f35d-42ec-accb-ef519f796dc8\" (UID: \"9cea25af-f35d-42ec-accb-ef519f796dc8\") " Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.403391 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cea25af-f35d-42ec-accb-ef519f796dc8-kube-api-access-jhwfs" (OuterVolumeSpecName: "kube-api-access-jhwfs") pod "9cea25af-f35d-42ec-accb-ef519f796dc8" (UID: "9cea25af-f35d-42ec-accb-ef519f796dc8"). InnerVolumeSpecName "kube-api-access-jhwfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.490317 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9cea25af-f35d-42ec-accb-ef519f796dc8" (UID: "9cea25af-f35d-42ec-accb-ef519f796dc8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.494134 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-config" (OuterVolumeSpecName: "config") pod "9cea25af-f35d-42ec-accb-ef519f796dc8" (UID: "9cea25af-f35d-42ec-accb-ef519f796dc8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.495489 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9cea25af-f35d-42ec-accb-ef519f796dc8" (UID: "9cea25af-f35d-42ec-accb-ef519f796dc8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.500775 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.500834 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhwfs\" (UniqueName: \"kubernetes.io/projected/9cea25af-f35d-42ec-accb-ef519f796dc8-kube-api-access-jhwfs\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.500899 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.500911 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.514370 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cea25af-f35d-42ec-accb-ef519f796dc8" (UID: "9cea25af-f35d-42ec-accb-ef519f796dc8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.530212 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9cea25af-f35d-42ec-accb-ef519f796dc8" (UID: "9cea25af-f35d-42ec-accb-ef519f796dc8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:29 crc kubenswrapper[4853]: E1209 17:25:29.573897 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.603336 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.603360 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cea25af-f35d-42ec-accb-ef519f796dc8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.619550 4853 generic.go:334] "Generic (PLEG): container finished" podID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerID="6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa" exitCode=0 Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.619617 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" event={"ID":"9cea25af-f35d-42ec-accb-ef519f796dc8","Type":"ContainerDied","Data":"6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa"} Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.619624 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.619655 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-47wqg" event={"ID":"9cea25af-f35d-42ec-accb-ef519f796dc8","Type":"ContainerDied","Data":"fee9b909063a2c67c41013e3a013a014a1f49e556c40eac89d6dea90588085f7"} Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.619723 4853 scope.go:117] "RemoveContainer" containerID="6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.654544 4853 scope.go:117] "RemoveContainer" containerID="c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.655725 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-47wqg"] Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.677346 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-47wqg"] Dec 09 17:25:29 crc kubenswrapper[4853]: W1209 17:25:29.696966 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d602a5_68a3_4b5a_825b_3313e3e85c0e.slice/crio-6115e6bacaf6c6e9ba4b03528ef2fe288a45bf72212192bacb687f9dfe91fe70 WatchSource:0}: Error finding container 6115e6bacaf6c6e9ba4b03528ef2fe288a45bf72212192bacb687f9dfe91fe70: Status 404 returned error can't find the container with id 6115e6bacaf6c6e9ba4b03528ef2fe288a45bf72212192bacb687f9dfe91fe70 Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.726727 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-q2qlg"] Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.868415 4853 scope.go:117] "RemoveContainer" 
containerID="6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa" Dec 09 17:25:29 crc kubenswrapper[4853]: E1209 17:25:29.869429 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa\": container with ID starting with 6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa not found: ID does not exist" containerID="6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.869475 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa"} err="failed to get container status \"6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa\": rpc error: code = NotFound desc = could not find container \"6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa\": container with ID starting with 6a8cb921a32ebb02ae980b000d224484fc07150403dfc8296cc87710e11b25aa not found: ID does not exist" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.869504 4853 scope.go:117] "RemoveContainer" containerID="c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5" Dec 09 17:25:29 crc kubenswrapper[4853]: E1209 17:25:29.869921 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5\": container with ID starting with c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5 not found: ID does not exist" containerID="c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5" Dec 09 17:25:29 crc kubenswrapper[4853]: I1209 17:25:29.869965 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5"} err="failed to get container status \"c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5\": rpc error: code = NotFound desc = could not find container \"c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5\": container with ID starting with c55f62877ea2db7c07571258c5e862bc35cc183c3c03d27f48091226fff7efd5 not found: ID does not exist" Dec 09 17:25:30 crc kubenswrapper[4853]: I1209 17:25:30.643362 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2d602a5-68a3-4b5a-825b-3313e3e85c0e" containerID="28b2e46a9e4ceb33ed85a5d8412763606576e4b78f7314e1fa2d8c5b4490fc97" exitCode=0 Dec 09 17:25:30 crc kubenswrapper[4853]: I1209 17:25:30.643655 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" event={"ID":"a2d602a5-68a3-4b5a-825b-3313e3e85c0e","Type":"ContainerDied","Data":"28b2e46a9e4ceb33ed85a5d8412763606576e4b78f7314e1fa2d8c5b4490fc97"} Dec 09 17:25:30 crc kubenswrapper[4853]: I1209 17:25:30.643681 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" event={"ID":"a2d602a5-68a3-4b5a-825b-3313e3e85c0e","Type":"ContainerStarted","Data":"6115e6bacaf6c6e9ba4b03528ef2fe288a45bf72212192bacb687f9dfe91fe70"} Dec 09 17:25:31 crc kubenswrapper[4853]: I1209 17:25:31.587891 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cea25af-f35d-42ec-accb-ef519f796dc8" path="/var/lib/kubelet/pods/9cea25af-f35d-42ec-accb-ef519f796dc8/volumes" Dec 09 17:25:31 crc kubenswrapper[4853]: I1209 
17:25:31.678110 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" event={"ID":"a2d602a5-68a3-4b5a-825b-3313e3e85c0e","Type":"ContainerStarted","Data":"0be6e60cd637eb0dabae0e18e718105096995e7e8c59eb9f59289f79020d87bb"} Dec 09 17:25:31 crc kubenswrapper[4853]: I1209 17:25:31.678279 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:31 crc kubenswrapper[4853]: I1209 17:25:31.703009 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" podStartSLOduration=3.702983509 podStartE2EDuration="3.702983509s" podCreationTimestamp="2025-12-09 17:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:25:31.697537926 +0000 UTC m=+1758.632277118" watchObservedRunningTime="2025-12-09 17:25:31.702983509 +0000 UTC m=+1758.637722691" Dec 09 17:25:35 crc kubenswrapper[4853]: I1209 17:25:35.567374 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:25:35 crc kubenswrapper[4853]: E1209 17:25:35.568184 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.057815 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-q2qlg" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.160153 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gsjfm"] Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.161205 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" podUID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerName="dnsmasq-dns" containerID="cri-o://12ee27b5cf3e933f7b7d978df64ec9968b0b64432080cebbacd81640db1b0dca" gracePeriod=10 Dec 09 17:25:39 crc kubenswrapper[4853]: E1209 17:25:39.573533 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.766506 4853 generic.go:334] "Generic (PLEG): container finished" podID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerID="12ee27b5cf3e933f7b7d978df64ec9968b0b64432080cebbacd81640db1b0dca" exitCode=0 Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.766553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" event={"ID":"adfce42f-8efb-4ccb-b49f-81624a6963de","Type":"ContainerDied","Data":"12ee27b5cf3e933f7b7d978df64ec9968b0b64432080cebbacd81640db1b0dca"} Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.766798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" 
event={"ID":"adfce42f-8efb-4ccb-b49f-81624a6963de","Type":"ContainerDied","Data":"4c2a561035c1fa14441757b8c197280210fdb2a14a38703e2b166f302c533a93"} Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.766813 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c2a561035c1fa14441757b8c197280210fdb2a14a38703e2b166f302c533a93" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.784957 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.800046 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-nb\") pod \"adfce42f-8efb-4ccb-b49f-81624a6963de\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.800114 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqg4k\" (UniqueName: \"kubernetes.io/projected/adfce42f-8efb-4ccb-b49f-81624a6963de-kube-api-access-hqg4k\") pod \"adfce42f-8efb-4ccb-b49f-81624a6963de\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.800348 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-sb\") pod \"adfce42f-8efb-4ccb-b49f-81624a6963de\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.800408 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-swift-storage-0\") pod \"adfce42f-8efb-4ccb-b49f-81624a6963de\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.800428 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-openstack-edpm-ipam\") pod \"adfce42f-8efb-4ccb-b49f-81624a6963de\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.800566 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-svc\") pod \"adfce42f-8efb-4ccb-b49f-81624a6963de\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.800664 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-config\") pod \"adfce42f-8efb-4ccb-b49f-81624a6963de\" (UID: \"adfce42f-8efb-4ccb-b49f-81624a6963de\") " Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.825915 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfce42f-8efb-4ccb-b49f-81624a6963de-kube-api-access-hqg4k" (OuterVolumeSpecName: "kube-api-access-hqg4k") pod "adfce42f-8efb-4ccb-b49f-81624a6963de" (UID: "adfce42f-8efb-4ccb-b49f-81624a6963de"). InnerVolumeSpecName "kube-api-access-hqg4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.879395 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-config" (OuterVolumeSpecName: "config") pod "adfce42f-8efb-4ccb-b49f-81624a6963de" (UID: "adfce42f-8efb-4ccb-b49f-81624a6963de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.884336 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "adfce42f-8efb-4ccb-b49f-81624a6963de" (UID: "adfce42f-8efb-4ccb-b49f-81624a6963de"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.897332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "adfce42f-8efb-4ccb-b49f-81624a6963de" (UID: "adfce42f-8efb-4ccb-b49f-81624a6963de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.904180 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqg4k\" (UniqueName: \"kubernetes.io/projected/adfce42f-8efb-4ccb-b49f-81624a6963de-kube-api-access-hqg4k\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.904251 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.904264 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.904276 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-config\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.917628 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adfce42f-8efb-4ccb-b49f-81624a6963de" (UID: "adfce42f-8efb-4ccb-b49f-81624a6963de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.923188 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adfce42f-8efb-4ccb-b49f-81624a6963de" (UID: "adfce42f-8efb-4ccb-b49f-81624a6963de"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:39 crc kubenswrapper[4853]: I1209 17:25:39.932220 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adfce42f-8efb-4ccb-b49f-81624a6963de" (UID: "adfce42f-8efb-4ccb-b49f-81624a6963de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:25:40 crc kubenswrapper[4853]: I1209 17:25:40.005779 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:40 crc kubenswrapper[4853]: I1209 17:25:40.005812 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:40 crc kubenswrapper[4853]: I1209 17:25:40.005821 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfce42f-8efb-4ccb-b49f-81624a6963de-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 17:25:40 crc kubenswrapper[4853]: I1209 17:25:40.777569 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gsjfm" Dec 09 17:25:40 crc kubenswrapper[4853]: I1209 17:25:40.820470 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gsjfm"] Dec 09 17:25:40 crc kubenswrapper[4853]: I1209 17:25:40.838195 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gsjfm"] Dec 09 17:25:41 crc kubenswrapper[4853]: I1209 17:25:41.579568 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfce42f-8efb-4ccb-b49f-81624a6963de" path="/var/lib/kubelet/pods/adfce42f-8efb-4ccb-b49f-81624a6963de/volumes" Dec 09 17:25:43 crc kubenswrapper[4853]: E1209 17:25:43.667824 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:25:43 crc kubenswrapper[4853]: E1209 17:25:43.668186 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:25:43 crc kubenswrapper[4853]: E1209 17:25:43.668361 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:25:43 crc kubenswrapper[4853]: E1209 17:25:43.669825 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:25:50 crc kubenswrapper[4853]: I1209 17:25:50.567775 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:25:50 crc kubenswrapper[4853]: E1209 17:25:50.568574 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:25:50 crc kubenswrapper[4853]: E1209 17:25:50.700339 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:25:50 crc kubenswrapper[4853]: E1209 17:25:50.700399 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:25:50 crc kubenswrapper[4853]: E1209 17:25:50.700532 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:25:50 crc kubenswrapper[4853]: E1209 17:25:50.701906 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:25:52 crc kubenswrapper[4853]: I1209 17:25:51.999536 4853 generic.go:334] "Generic (PLEG): container finished" podID="2ce495e5-4db9-457d-a5c9-eb39308cbcd2" containerID="be7a40059b47e642453463f2d02c25e3a0e2dae70dee72890508c9e9d7edf8a7" exitCode=0 Dec 09 17:25:52 crc kubenswrapper[4853]: I1209 17:25:51.999649 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce495e5-4db9-457d-a5c9-eb39308cbcd2","Type":"ContainerDied","Data":"be7a40059b47e642453463f2d02c25e3a0e2dae70dee72890508c9e9d7edf8a7"} Dec 09 17:25:52 crc kubenswrapper[4853]: I1209 17:25:52.003191 4853 generic.go:334] "Generic (PLEG): container finished" podID="fe91677e-e106-4624-a45e-45111c868559" containerID="924100489e5218580a84fdc95fb9e5b48bbe1c99e9977834a0be2cd442f6199b" exitCode=0 Dec 09 17:25:52 crc kubenswrapper[4853]: I1209 17:25:52.003224 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe91677e-e106-4624-a45e-45111c868559","Type":"ContainerDied","Data":"924100489e5218580a84fdc95fb9e5b48bbe1c99e9977834a0be2cd442f6199b"} Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.017567 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce495e5-4db9-457d-a5c9-eb39308cbcd2","Type":"ContainerStarted","Data":"e8444ad5e131ec88c531f40b3c703a196728b6fd19794a81d9a9ec01bf6bf7b5"} Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.018561 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.022873 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fe91677e-e106-4624-a45e-45111c868559","Type":"ContainerStarted","Data":"90059953ab8124b7f229c9b270b738b69bf7c8d558896a442f16c130d922341d"} Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.023328 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.051723 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.051695092 podStartE2EDuration="37.051695092s" podCreationTimestamp="2025-12-09 17:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:25:53.044717908 +0000 UTC m=+1779.979457090" watchObservedRunningTime="2025-12-09 17:25:53.051695092 +0000 UTC m=+1779.986434274" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.073853 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.073810994 podStartE2EDuration="38.073810994s" podCreationTimestamp="2025-12-09 17:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:25:53.069524031 +0000 UTC m=+1780.004263213" watchObservedRunningTime="2025-12-09 17:25:53.073810994 +0000 UTC m=+1780.008550176" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.661658 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg"] Dec 09 17:25:53 crc kubenswrapper[4853]: E1209 
17:25:53.662657 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerName="init" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.662681 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerName="init" Dec 09 17:25:53 crc kubenswrapper[4853]: E1209 17:25:53.662718 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerName="init" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.662727 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerName="init" Dec 09 17:25:53 crc kubenswrapper[4853]: E1209 17:25:53.662762 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerName="dnsmasq-dns" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.662770 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerName="dnsmasq-dns" Dec 09 17:25:53 crc kubenswrapper[4853]: E1209 17:25:53.662789 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerName="dnsmasq-dns" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.662797 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerName="dnsmasq-dns" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.663102 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfce42f-8efb-4ccb-b49f-81624a6963de" containerName="dnsmasq-dns" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.663154 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cea25af-f35d-42ec-accb-ef519f796dc8" containerName="dnsmasq-dns" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.664281 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.668144 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.668280 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.668393 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.673787 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.677572 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg"] Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.771817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.771969 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.771996 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7zzr\" (UniqueName: \"kubernetes.io/projected/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-kube-api-access-q7zzr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.772034 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.874527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.874694 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.874723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7zzr\" (UniqueName: \"kubernetes.io/projected/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-kube-api-access-q7zzr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.874762 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.880197 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.881021 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.886865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.896935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7zzr\" (UniqueName: \"kubernetes.io/projected/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-kube-api-access-q7zzr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:53 crc kubenswrapper[4853]: I1209 17:25:53.987979 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:25:54 crc kubenswrapper[4853]: I1209 17:25:54.658335 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg"] Dec 09 17:25:55 crc kubenswrapper[4853]: I1209 17:25:55.045157 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" event={"ID":"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433","Type":"ContainerStarted","Data":"6b8d6d197f1fbb07b5740d29eb31b40b82efc51e57c0f7edb241586c7b7372a9"} Dec 09 17:25:55 crc kubenswrapper[4853]: E1209 17:25:55.569461 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:26:02 crc kubenswrapper[4853]: E1209 17:26:02.568907 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:26:04 crc kubenswrapper[4853]: I1209 17:26:04.567214 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:26:04 crc kubenswrapper[4853]: E1209 17:26:04.567809 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:26:06 crc kubenswrapper[4853]: I1209 17:26:06.234312 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" event={"ID":"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433","Type":"ContainerStarted","Data":"2b0714279a91d73f1ca99476657741fe008aa28d5a4c457c59ad24401138325e"} Dec 09 17:26:06 crc kubenswrapper[4853]: I1209 17:26:06.262854 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" podStartSLOduration=2.291501703 podStartE2EDuration="13.262833446s" podCreationTimestamp="2025-12-09 17:25:53 +0000 UTC" firstStartedPulling="2025-12-09 17:25:54.700708083 +0000 UTC m=+1781.635447265" lastFinishedPulling="2025-12-09 17:26:05.672039826 +0000 UTC m=+1792.606779008" observedRunningTime="2025-12-09 17:26:06.251210679 +0000 UTC m=+1793.185949871" watchObservedRunningTime="2025-12-09 17:26:06.262833446 +0000 UTC m=+1793.197572628" Dec 09 17:26:06 crc kubenswrapper[4853]: I1209 17:26:06.325764 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 17:26:06 crc kubenswrapper[4853]: E1209 17:26:06.569882 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:26:06 crc kubenswrapper[4853]: I1209 17:26:06.792902 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 17:26:13 crc kubenswrapper[4853]: E1209 17:26:13.579673 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:26:17 crc kubenswrapper[4853]: I1209 17:26:17.376774 4853 generic.go:334] "Generic (PLEG): container finished" podID="c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" containerID="2b0714279a91d73f1ca99476657741fe008aa28d5a4c457c59ad24401138325e" exitCode=0 Dec 09 17:26:17 crc kubenswrapper[4853]: I1209 17:26:17.376956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" event={"ID":"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433","Type":"ContainerDied","Data":"2b0714279a91d73f1ca99476657741fe008aa28d5a4c457c59ad24401138325e"} Dec 09 17:26:18 crc kubenswrapper[4853]: I1209 17:26:18.568904 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:26:18 crc kubenswrapper[4853]: E1209 17:26:18.569434 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:26:18 crc kubenswrapper[4853]: E1209 17:26:18.569509 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:26:18 crc kubenswrapper[4853]: I1209 17:26:18.990534 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.062795 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-repo-setup-combined-ca-bundle\") pod \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.062850 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7zzr\" (UniqueName: \"kubernetes.io/projected/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-kube-api-access-q7zzr\") pod \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.062912 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-inventory\") pod \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.063020 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-ssh-key\") pod \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\" (UID: \"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433\") " Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.078677 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" (UID: "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.092613 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-kube-api-access-q7zzr" (OuterVolumeSpecName: "kube-api-access-q7zzr") pod "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" (UID: "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433"). InnerVolumeSpecName "kube-api-access-q7zzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.131968 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" (UID: "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.133754 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-inventory" (OuterVolumeSpecName: "inventory") pod "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" (UID: "c8f54559-8d6b-42e6-b5f3-db4c8b6ed433"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.165404 4853 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.165445 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7zzr\" (UniqueName: \"kubernetes.io/projected/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-kube-api-access-q7zzr\") on node \"crc\" DevicePath \"\"" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.165459 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.165471 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8f54559-8d6b-42e6-b5f3-db4c8b6ed433-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.408139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" event={"ID":"c8f54559-8d6b-42e6-b5f3-db4c8b6ed433","Type":"ContainerDied","Data":"6b8d6d197f1fbb07b5740d29eb31b40b82efc51e57c0f7edb241586c7b7372a9"} Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.408185 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b8d6d197f1fbb07b5740d29eb31b40b82efc51e57c0f7edb241586c7b7372a9" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.408248 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.509820 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p"] Dec 09 17:26:19 crc kubenswrapper[4853]: E1209 17:26:19.510575 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.510624 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.511099 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f54559-8d6b-42e6-b5f3-db4c8b6ed433" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.512449 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.525650 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p"] Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.555244 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.555427 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.555748 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.555756 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.575427 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hh8\" (UniqueName: \"kubernetes.io/projected/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-kube-api-access-k4hh8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.575489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.575648 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.678367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.679757 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4hh8\" (UniqueName: \"kubernetes.io/projected/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-kube-api-access-k4hh8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.679932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.684217 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.684758 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.699687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4hh8\" (UniqueName: \"kubernetes.io/projected/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-kube-api-access-k4hh8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crt7p\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:19 crc kubenswrapper[4853]: I1209 17:26:19.871363 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:20 crc kubenswrapper[4853]: I1209 17:26:20.497267 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p"] Dec 09 17:26:21 crc kubenswrapper[4853]: I1209 17:26:21.440071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" event={"ID":"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9","Type":"ContainerStarted","Data":"3a4460e0878fd4785c097c127b4a9c516daa98bd9c6c956dcdc42bddbd32acfd"} Dec 09 17:26:21 crc kubenswrapper[4853]: I1209 17:26:21.440320 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" event={"ID":"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9","Type":"ContainerStarted","Data":"e9b40dede879156ee3e829938f155425fc21eb961f0d6fe55ab7f19d6ee93277"} Dec 09 17:26:24 crc kubenswrapper[4853]: I1209 17:26:24.477654 4853 generic.go:334] "Generic (PLEG): container finished" podID="da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9" containerID="3a4460e0878fd4785c097c127b4a9c516daa98bd9c6c956dcdc42bddbd32acfd" exitCode=0 Dec 09 17:26:24 crc kubenswrapper[4853]: I1209 17:26:24.477729 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" event={"ID":"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9","Type":"ContainerDied","Data":"3a4460e0878fd4785c097c127b4a9c516daa98bd9c6c956dcdc42bddbd32acfd"} Dec 09 17:26:25 crc kubenswrapper[4853]: I1209 17:26:25.210497 4853 scope.go:117] "RemoveContainer" containerID="3aa945de9f40ff01bea42457b7153a08d0589b5c3a487bd27c8124f884a6b089" Dec 09 17:26:25 crc kubenswrapper[4853]: I1209 17:26:25.269041 4853 scope.go:117] "RemoveContainer" containerID="34d351e1137aebe3693894cce3ec63de6298d7392d97324cd32e5a56305557f3" Dec 09 17:26:25 crc kubenswrapper[4853]: I1209 17:26:25.362513 4853 scope.go:117] "RemoveContainer" containerID="711b874403c75309801464cd3585bf230e15aa077c97fbb69a5cfa6b4b0e48b7" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 
17:26:26.002339 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.152775 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4hh8\" (UniqueName: \"kubernetes.io/projected/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-kube-api-access-k4hh8\") pod \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.152898 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-inventory\") pod \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.153094 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-ssh-key\") pod \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\" (UID: \"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9\") " Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.158769 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-kube-api-access-k4hh8" (OuterVolumeSpecName: "kube-api-access-k4hh8") pod "da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9" (UID: "da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9"). InnerVolumeSpecName "kube-api-access-k4hh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.189303 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-inventory" (OuterVolumeSpecName: "inventory") pod "da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9" (UID: "da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.208915 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9" (UID: "da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.255870 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4hh8\" (UniqueName: \"kubernetes.io/projected/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-kube-api-access-k4hh8\") on node \"crc\" DevicePath \"\"" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.255911 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.255922 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.515471 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" event={"ID":"da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9","Type":"ContainerDied","Data":"e9b40dede879156ee3e829938f155425fc21eb961f0d6fe55ab7f19d6ee93277"} Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.515527 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b40dede879156ee3e829938f155425fc21eb961f0d6fe55ab7f19d6ee93277" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.515651 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crt7p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.610974 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p"] Dec 09 17:26:26 crc kubenswrapper[4853]: E1209 17:26:26.611708 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.611809 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.612153 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.613118 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.617058 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.617071 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.617062 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.617884 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.628822 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p"] Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.769726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.769790 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.770283 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.770711 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm87x\" (UniqueName: \"kubernetes.io/projected/a9d8c808-933c-4c72-b6a2-8fd9371629ef-kube-api-access-rm87x\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.873400 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.873543 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm87x\" (UniqueName: \"kubernetes.io/projected/a9d8c808-933c-4c72-b6a2-8fd9371629ef-kube-api-access-rm87x\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.873579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.873622 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.876561 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.877212 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.877287 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.897369 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm87x\" (UniqueName: \"kubernetes.io/projected/a9d8c808-933c-4c72-b6a2-8fd9371629ef-kube-api-access-rm87x\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:26 crc kubenswrapper[4853]: I1209 17:26:26.969355 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:26:27 crc kubenswrapper[4853]: I1209 17:26:27.582401 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p"] Dec 09 17:26:28 crc kubenswrapper[4853]: I1209 17:26:28.551872 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" event={"ID":"a9d8c808-933c-4c72-b6a2-8fd9371629ef","Type":"ContainerStarted","Data":"6d7a09429800196269b50120dab4eb3df8c95645804fef59f905174355592411"} Dec 09 17:26:28 crc kubenswrapper[4853]: I1209 17:26:28.552163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" event={"ID":"a9d8c808-933c-4c72-b6a2-8fd9371629ef","Type":"ContainerStarted","Data":"7a840f63a7441ac881f6f10ffac2b58ef92eb55422cf7408cf67df535724f78c"} Dec 09 17:26:28 crc kubenswrapper[4853]: E1209 17:26:28.569689 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:26:28 crc kubenswrapper[4853]: I1209 17:26:28.605778 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" podStartSLOduration=2.165629541 podStartE2EDuration="2.605755235s" podCreationTimestamp="2025-12-09 17:26:26 +0000 UTC" firstStartedPulling="2025-12-09 17:26:27.566676906 +0000 UTC m=+1814.501416098" lastFinishedPulling="2025-12-09 17:26:28.00680262 +0000 UTC m=+1814.941541792" observedRunningTime="2025-12-09 17:26:28.576904476 +0000 UTC m=+1815.511643668" watchObservedRunningTime="2025-12-09 17:26:28.605755235 +0000 UTC m=+1815.540494417" Dec 09 17:26:30 crc kubenswrapper[4853]: I1209 17:26:30.568241 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:26:30 crc kubenswrapper[4853]: E1209 17:26:30.569216 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:26:32 crc kubenswrapper[4853]: E1209 17:26:32.699934 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:26:32 crc kubenswrapper[4853]: E1209 17:26:32.700277 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:26:32 crc kubenswrapper[4853]: E1209 17:26:32.700447 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 17:26:32 crc kubenswrapper[4853]: E1209 17:26:32.702096 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:26:40 crc kubenswrapper[4853]: E1209 17:26:40.694776 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:26:40 crc kubenswrapper[4853]: E1209 17:26:40.695287 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:26:40 crc kubenswrapper[4853]: E1209 17:26:40.695436 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:26:40 crc kubenswrapper[4853]: E1209 17:26:40.696657 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:26:41 crc kubenswrapper[4853]: I1209 17:26:41.567838 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:26:41 crc kubenswrapper[4853]: E1209 17:26:41.568843 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:26:44 crc kubenswrapper[4853]: E1209 17:26:44.571977 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:26:54 crc kubenswrapper[4853]: E1209 17:26:54.573592 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:26:56 crc kubenswrapper[4853]: I1209 17:26:56.568958 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:26:56 crc kubenswrapper[4853]: E1209 
17:26:56.570488 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:26:59 crc kubenswrapper[4853]: E1209 17:26:59.568941 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:27:05 crc kubenswrapper[4853]: E1209 17:27:05.571382 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:27:10 crc kubenswrapper[4853]: I1209 17:27:10.567888 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:27:10 crc kubenswrapper[4853]: E1209 17:27:10.568699 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:27:12 crc kubenswrapper[4853]: E1209 17:27:12.571305 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:27:20 crc kubenswrapper[4853]: E1209 17:27:20.570210 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:27:25 crc kubenswrapper[4853]: I1209 17:27:25.552427 4853 scope.go:117] "RemoveContainer" containerID="fad4c4849f5302e7fb69e01d4bb4dbb5c9e97b47d24a7966c5311073534a158f" Dec 09 17:27:25 crc kubenswrapper[4853]: I1209 17:27:25.567960 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:27:25 crc kubenswrapper[4853]: E1209 17:27:25.568404 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" 
podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:27:25 crc kubenswrapper[4853]: I1209 17:27:25.583623 4853 scope.go:117] "RemoveContainer" containerID="e3c040a04b35d4f21410f73ef47c654dc2bc735c0618450d4640188fdcbaf697" Dec 09 17:27:25 crc kubenswrapper[4853]: I1209 17:27:25.672131 4853 scope.go:117] "RemoveContainer" containerID="061f55522b9662cff917876d7f9a1914135d687e2afc6fd1d9cf19ef694d8da9" Dec 09 17:27:25 crc kubenswrapper[4853]: I1209 17:27:25.699066 4853 scope.go:117] "RemoveContainer" containerID="39b40ecdfa5cbf99a4947871eed47955adca53cabd600ea5f42470db4f4db90d" Dec 09 17:27:25 crc kubenswrapper[4853]: I1209 17:27:25.744426 4853 scope.go:117] "RemoveContainer" containerID="e6b74223f78c0e0010bc7c951980521974c85cbbe6a804780cd558381f49be16" Dec 09 17:27:27 crc kubenswrapper[4853]: E1209 17:27:27.569480 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:27:32 crc kubenswrapper[4853]: E1209 17:27:32.583859 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:27:37 crc kubenswrapper[4853]: I1209 17:27:37.567949 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:27:37 crc kubenswrapper[4853]: E1209 17:27:37.569178 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:27:41 crc kubenswrapper[4853]: E1209 17:27:41.570563 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:27:46 crc kubenswrapper[4853]: E1209 17:27:46.584854 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:27:49 crc kubenswrapper[4853]: I1209 17:27:49.567280 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:27:49 crc kubenswrapper[4853]: E1209 17:27:49.568172 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:27:52 crc kubenswrapper[4853]: E1209 17:27:52.569391 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:28:01 crc kubenswrapper[4853]: I1209 17:28:01.568620 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:28:01 crc kubenswrapper[4853]: E1209 17:28:01.569450 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:28:01 crc kubenswrapper[4853]: E1209 17:28:01.692527 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:28:01 crc kubenswrapper[4853]: E1209 17:28:01.692651 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
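Note the cadence in the entries above: pod_workers re-reports ImagePullBackOff or CrashLoopBackOff on nearly every sync (roughly every 10-15 s), but actual pull retries are much sparser; the pulls at 17:26:32/17:26:40 versus 17:28:01/17:28:04 are about 90 s apart, and the CrashLoopBackOff lines quote the 5m0s cap. A minimal sketch of that capped doubling schedule, assuming upstream defaults of a 10 s initial delay and factor 2 (the real kubelet keys this backoff per image and per container):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Capped exponential backoff as quoted in the log ("back-off 5m0s").
	// Initial delay and doubling factor are assumptions matching defaults.
	delay, limit := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
		delay *= 2
		if delay > limit {
			delay = limit
		}
	}
}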
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:28:01 crc kubenswrapper[4853]: E1209 17:28:01.692817 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:28:01 crc kubenswrapper[4853]: E1209 17:28:01.694047 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
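The &Container{...} dump above serializes the ceilometer-central-agent spec inline, which makes the liveness probe easy to miss. Rebuilt as k8s.io/api/core/v1 types for readability (a sketch with values copied from the log, not the operator's source):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Liveness probe from the spec dump above: exec the health script every
	// 5s after a 300s grace period, and fail after 3 consecutive misses.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/python3", "/var/lib/openstack/bin/centralhealth.py"},
			},
		},
		InitialDelaySeconds: 300,
		TimeoutSeconds:      5,
		PeriodSeconds:       5,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
	fmt.Printf("%+v\n", probe)
}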
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:28:04 crc kubenswrapper[4853]: E1209 17:28:04.688685 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:28:04 crc kubenswrapper[4853]: E1209 17:28:04.689182 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:28:04 crc kubenswrapper[4853]: E1209 17:28:04.689339 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:28:04 crc kubenswrapper[4853]: E1209 17:28:04.690469 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:28:15 crc kubenswrapper[4853]: I1209 17:28:15.568276 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:28:15 crc kubenswrapper[4853]: E1209 17:28:15.569027 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:28:15 crc kubenswrapper[4853]: E1209 17:28:15.571684 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:28:17 crc kubenswrapper[4853]: E1209 17:28:17.568148 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:28:25 crc kubenswrapper[4853]: I1209 17:28:25.885358 4853 scope.go:117] "RemoveContainer" containerID="e63625777ba013b969f53ed93d2854dcdaa1be857780e9a06154c646cc4cc96a" Dec 09 17:28:27 crc kubenswrapper[4853]: E1209 17:28:27.567963 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:28:28 crc kubenswrapper[4853]: E1209 17:28:28.570005 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:28:29 crc kubenswrapper[4853]: I1209 17:28:29.567286 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:28:29 crc kubenswrapper[4853]: E1209 17:28:29.568129 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:28:39 crc kubenswrapper[4853]: E1209 17:28:39.572675 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:28:41 crc kubenswrapper[4853]: I1209 17:28:41.567706 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:28:41 crc kubenswrapper[4853]: E1209 17:28:41.568502 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:28:42 crc kubenswrapper[4853]: E1209 17:28:42.571177 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:28:51 crc kubenswrapper[4853]: E1209 17:28:51.569806 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:28:53 crc kubenswrapper[4853]: I1209 17:28:53.576620 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:28:53 crc kubenswrapper[4853]: E1209 17:28:53.577131 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:28:57 crc kubenswrapper[4853]: E1209 17:28:57.570845 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:29:04 crc kubenswrapper[4853]: E1209 17:29:04.570124 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:29:06 crc kubenswrapper[4853]: I1209 17:29:06.568238 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:29:07 crc kubenswrapper[4853]: I1209 17:29:07.622633 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"00554452bc356075032339a30493a8db8eb8765a75d762f1a48b0ef033e8dfa3"} Dec 09 17:29:09 crc kubenswrapper[4853]: E1209 17:29:09.572235 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:29:19 crc kubenswrapper[4853]: E1209 17:29:19.570561 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:29:22 crc kubenswrapper[4853]: E1209 17:29:22.569851 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:29:25 crc kubenswrapper[4853]: I1209 17:29:25.970115 4853 scope.go:117] "RemoveContainer" containerID="c9d4e438b848cb51444667a59dcfb1d32e64ce9ef2f2a7bca3962c328f792b0b" Dec 09 17:29:26 crc kubenswrapper[4853]: I1209 17:29:26.012725 4853 scope.go:117] "RemoveContainer" containerID="23fd1afd0437cfc7a7f24b768805570a765bf54cc4ebed7e501aa9ee970c576a" Dec 09 17:29:26 crc kubenswrapper[4853]: I1209 17:29:26.046081 4853 scope.go:117] "RemoveContainer" containerID="cdf62787aea3b7548def2cd6bd2bb131b41fdf44d10e160b7df415f72a049c58" Dec 09 17:29:26 crc kubenswrapper[4853]: I1209 17:29:26.079069 4853 scope.go:117] "RemoveContainer" containerID="ca10fd94240a74555b9fd6cd6d00012213f533f7ee0947ec37d6b5504fc45894" Dec 09 17:29:26 crc kubenswrapper[4853]: I1209 17:29:26.114571 4853 scope.go:117] "RemoveContainer" containerID="be5ebf0517e96b6f6dfb692a07d0a5b8ecbc9d74745a19f3ea71329b1482a591" Dec 09 17:29:26 crc kubenswrapper[4853]: I1209 17:29:26.151789 4853 scope.go:117] "RemoveContainer" containerID="64bf07429d0ea4be42944bebc322ad19165e2632f662cdeea3c343ab136d86db" Dec 09 17:29:26 crc kubenswrapper[4853]: I1209 17:29:26.184721 4853 scope.go:117] "RemoveContainer" containerID="bf2689199ab5e6e2f1529ee37f6c2d6a879ac6f5f7040a8801874c1deacce110" Dec 09 17:29:32 crc kubenswrapper[4853]: I1209 17:29:32.075804 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-qlcff"] Dec 09 17:29:32 crc kubenswrapper[4853]: I1209 17:29:32.089077 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-7926-account-create-update-2xwpc"] Dec 09 17:29:32 crc kubenswrapper[4853]: I1209 
17:29:32.099244 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-qlcff"] Dec 09 17:29:32 crc kubenswrapper[4853]: I1209 17:29:32.108830 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-7926-account-create-update-2xwpc"] Dec 09 17:29:33 crc kubenswrapper[4853]: I1209 17:29:33.585310 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60933e3f-3b3c-40ab-a960-188a3b30b2f1" path="/var/lib/kubelet/pods/60933e3f-3b3c-40ab-a960-188a3b30b2f1/volumes" Dec 09 17:29:33 crc kubenswrapper[4853]: I1209 17:29:33.586746 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe99053-f2b5-435d-838e-06a8c59652dc" path="/var/lib/kubelet/pods/cfe99053-f2b5-435d-838e-06a8c59652dc/volumes" Dec 09 17:29:34 crc kubenswrapper[4853]: E1209 17:29:34.572445 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:29:36 crc kubenswrapper[4853]: I1209 17:29:36.048849 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1a56-account-create-update-prr4z"] Dec 09 17:29:36 crc kubenswrapper[4853]: I1209 17:29:36.065849 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-slx9d"] Dec 09 17:29:36 crc kubenswrapper[4853]: I1209 17:29:36.076822 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1a56-account-create-update-prr4z"] Dec 09 17:29:36 crc kubenswrapper[4853]: I1209 17:29:36.099674 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-slx9d"] Dec 09 17:29:37 crc kubenswrapper[4853]: E1209 17:29:37.570137 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:29:37 crc kubenswrapper[4853]: I1209 17:29:37.589323 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="033e773c-398b-493a-92aa-464307b11906" path="/var/lib/kubelet/pods/033e773c-398b-493a-92aa-464307b11906/volumes" Dec 09 17:29:37 crc kubenswrapper[4853]: I1209 17:29:37.591445 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b36e250-d75c-41bf-a8a6-51c84ec06406" path="/var/lib/kubelet/pods/1b36e250-d75c-41bf-a8a6-51c84ec06406/volumes" Dec 09 17:29:38 crc kubenswrapper[4853]: I1209 17:29:38.953302 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8w92"] Dec 09 17:29:38 crc kubenswrapper[4853]: I1209 17:29:38.958579 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:38 crc kubenswrapper[4853]: I1209 17:29:38.984411 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8w92"] Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.137721 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-catalog-content\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.138103 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdk8\" (UniqueName: \"kubernetes.io/projected/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-kube-api-access-2fdk8\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.138277 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-utilities\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.241514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-catalog-content\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.241559 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdk8\" (UniqueName: \"kubernetes.io/projected/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-kube-api-access-2fdk8\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.241667 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-utilities\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.241934 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-catalog-content\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.241994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-utilities\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.261567 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2fdk8\" (UniqueName: \"kubernetes.io/projected/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-kube-api-access-2fdk8\") pod \"redhat-operators-t8w92\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.292882 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:39 crc kubenswrapper[4853]: I1209 17:29:39.815256 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8w92"] Dec 09 17:29:40 crc kubenswrapper[4853]: I1209 17:29:40.060151 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerStarted","Data":"a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427"} Dec 09 17:29:40 crc kubenswrapper[4853]: I1209 17:29:40.060853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerStarted","Data":"587af48c56119c5665d343886f2769b05ef05a531d333ba7bc07687818125f1f"} Dec 09 17:29:41 crc kubenswrapper[4853]: I1209 17:29:41.076166 4853 generic.go:334] "Generic (PLEG): container finished" podID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerID="a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427" exitCode=0 Dec 09 17:29:41 crc kubenswrapper[4853]: I1209 17:29:41.076234 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerDied","Data":"a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427"} Dec 09 17:29:42 crc kubenswrapper[4853]: I1209 17:29:42.091048 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerStarted","Data":"91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087"} Dec 09 17:29:45 crc kubenswrapper[4853]: E1209 17:29:45.569859 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.046464 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6cff-account-create-update-nzq9s"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.060934 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-36df-account-create-update-qs6kt"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.075862 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6cff-account-create-update-nzq9s"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.089201 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dv9px"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.102636 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-36df-account-create-update-qs6kt"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.116316 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-create-dv9px"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.133344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nq7h2"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.142822 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nq7h2"] Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.159609 4853 generic.go:334] "Generic (PLEG): container finished" podID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerID="91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087" exitCode=0 Dec 09 17:29:46 crc kubenswrapper[4853]: I1209 17:29:46.159687 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerDied","Data":"91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087"} Dec 09 17:29:47 crc kubenswrapper[4853]: I1209 17:29:47.175251 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerStarted","Data":"bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee"} Dec 09 17:29:47 crc kubenswrapper[4853]: I1209 17:29:47.197025 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t8w92" podStartSLOduration=3.615282798 podStartE2EDuration="9.197000826s" podCreationTimestamp="2025-12-09 17:29:38 +0000 UTC" firstStartedPulling="2025-12-09 17:29:41.078782238 +0000 UTC m=+2008.013521450" lastFinishedPulling="2025-12-09 17:29:46.660500296 +0000 UTC m=+2013.595239478" observedRunningTime="2025-12-09 17:29:47.195907405 +0000 UTC m=+2014.130646587" watchObservedRunningTime="2025-12-09 17:29:47.197000826 +0000 UTC m=+2014.131740008" Dec 09 17:29:47 crc kubenswrapper[4853]: I1209 17:29:47.582058 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170cafdb-5ab6-47f8-ba66-0398e9ca3904" path="/var/lib/kubelet/pods/170cafdb-5ab6-47f8-ba66-0398e9ca3904/volumes" Dec 09 17:29:47 crc kubenswrapper[4853]: I1209 17:29:47.582924 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf06379-f14c-4652-9768-459276512e7f" path="/var/lib/kubelet/pods/8cf06379-f14c-4652-9768-459276512e7f/volumes" Dec 09 17:29:47 crc kubenswrapper[4853]: I1209 17:29:47.583567 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9117d61-707d-4f83-9a09-eaf1c26c1b11" path="/var/lib/kubelet/pods/b9117d61-707d-4f83-9a09-eaf1c26c1b11/volumes" Dec 09 17:29:47 crc kubenswrapper[4853]: I1209 17:29:47.584285 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21c5479-b7f8-47f1-9503-ae75e212fe56" path="/var/lib/kubelet/pods/d21c5479-b7f8-47f1-9503-ae75e212fe56/volumes" Dec 09 17:29:49 crc kubenswrapper[4853]: I1209 17:29:49.293848 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:49 crc kubenswrapper[4853]: I1209 17:29:49.294488 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:29:50 crc kubenswrapper[4853]: I1209 17:29:50.036475 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jrd55"] Dec 09 17:29:50 crc kubenswrapper[4853]: I1209 17:29:50.054464 4853 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/mysqld-exporter-a2c0-account-create-update-l856k"] Dec 09 17:29:50 crc kubenswrapper[4853]: I1209 17:29:50.068716 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-a2c0-account-create-update-l856k"] Dec 09 17:29:50 crc kubenswrapper[4853]: I1209 17:29:50.080675 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-jrd55"] Dec 09 17:29:50 crc kubenswrapper[4853]: I1209 17:29:50.365654 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8w92" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="registry-server" probeResult="failure" output=< Dec 09 17:29:50 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 17:29:50 crc kubenswrapper[4853]: > Dec 09 17:29:50 crc kubenswrapper[4853]: E1209 17:29:50.569772 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:29:51 crc kubenswrapper[4853]: I1209 17:29:51.582091 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fab64e-a230-4cd5-a766-7d5668603181" path="/var/lib/kubelet/pods/38fab64e-a230-4cd5-a766-7d5668603181/volumes" Dec 09 17:29:51 crc kubenswrapper[4853]: I1209 17:29:51.583392 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2641ad-aacc-489a-b452-0751797303e0" path="/var/lib/kubelet/pods/ab2641ad-aacc-489a-b452-0751797303e0/volumes" Dec 09 17:29:54 crc kubenswrapper[4853]: I1209 17:29:54.253656 4853 generic.go:334] "Generic (PLEG): container finished" podID="a9d8c808-933c-4c72-b6a2-8fd9371629ef" containerID="6d7a09429800196269b50120dab4eb3df8c95645804fef59f905174355592411" exitCode=0 Dec 09 17:29:54 crc kubenswrapper[4853]: I1209 17:29:54.253853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" event={"ID":"a9d8c808-933c-4c72-b6a2-8fd9371629ef","Type":"ContainerDied","Data":"6d7a09429800196269b50120dab4eb3df8c95645804fef59f905174355592411"} Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.792896 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.902644 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-inventory\") pod \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.902725 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-bootstrap-combined-ca-bundle\") pod \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.902830 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm87x\" (UniqueName: \"kubernetes.io/projected/a9d8c808-933c-4c72-b6a2-8fd9371629ef-kube-api-access-rm87x\") pod \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.902898 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-ssh-key\") pod \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\" (UID: \"a9d8c808-933c-4c72-b6a2-8fd9371629ef\") " Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.909040 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a9d8c808-933c-4c72-b6a2-8fd9371629ef" (UID: "a9d8c808-933c-4c72-b6a2-8fd9371629ef"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.909944 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d8c808-933c-4c72-b6a2-8fd9371629ef-kube-api-access-rm87x" (OuterVolumeSpecName: "kube-api-access-rm87x") pod "a9d8c808-933c-4c72-b6a2-8fd9371629ef" (UID: "a9d8c808-933c-4c72-b6a2-8fd9371629ef"). InnerVolumeSpecName "kube-api-access-rm87x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.943126 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a9d8c808-933c-4c72-b6a2-8fd9371629ef" (UID: "a9d8c808-933c-4c72-b6a2-8fd9371629ef"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:29:55 crc kubenswrapper[4853]: I1209 17:29:55.943506 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-inventory" (OuterVolumeSpecName: "inventory") pod "a9d8c808-933c-4c72-b6a2-8fd9371629ef" (UID: "a9d8c808-933c-4c72-b6a2-8fd9371629ef"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.011108 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.011163 4853 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.011176 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm87x\" (UniqueName: \"kubernetes.io/projected/a9d8c808-933c-4c72-b6a2-8fd9371629ef-kube-api-access-rm87x\") on node \"crc\" DevicePath \"\"" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.011186 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9d8c808-933c-4c72-b6a2-8fd9371629ef-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.284171 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" event={"ID":"a9d8c808-933c-4c72-b6a2-8fd9371629ef","Type":"ContainerDied","Data":"7a840f63a7441ac881f6f10ffac2b58ef92eb55422cf7408cf67df535724f78c"} Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.284244 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a840f63a7441ac881f6f10ffac2b58ef92eb55422cf7408cf67df535724f78c" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.284315 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.396540 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw"] Dec 09 17:29:56 crc kubenswrapper[4853]: E1209 17:29:56.398288 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d8c808-933c-4c72-b6a2-8fd9371629ef" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.398320 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d8c808-933c-4c72-b6a2-8fd9371629ef" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.398744 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d8c808-933c-4c72-b6a2-8fd9371629ef" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.399897 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.404876 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.404910 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.405174 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.405431 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.408068 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw"] Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.529864 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mchj\" (UniqueName: \"kubernetes.io/projected/7796c327-5952-4b15-a864-511d8f1c75d6-kube-api-access-8mchj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.529992 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.530027 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: E1209 17:29:56.586248 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d8c808_933c_4c72_b6a2_8fd9371629ef.slice/crio-7a840f63a7441ac881f6f10ffac2b58ef92eb55422cf7408cf67df535724f78c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d8c808_933c_4c72_b6a2_8fd9371629ef.slice\": RecentStats: unable to find data in memory cache]" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.632577 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mchj\" (UniqueName: \"kubernetes.io/projected/7796c327-5952-4b15-a864-511d8f1c75d6-kube-api-access-8mchj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.632747 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.632771 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.637249 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.637871 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.649575 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mchj\" (UniqueName: \"kubernetes.io/projected/7796c327-5952-4b15-a864-511d8f1c75d6-kube-api-access-8mchj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:56 crc kubenswrapper[4853]: I1209 17:29:56.717440 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:29:57 crc kubenswrapper[4853]: I1209 17:29:57.275446 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw"] Dec 09 17:29:57 crc kubenswrapper[4853]: I1209 17:29:57.296627 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" event={"ID":"7796c327-5952-4b15-a864-511d8f1c75d6","Type":"ContainerStarted","Data":"0ec753d923add03c2953ba55f3a0ffa35ecb65f6f05fa1122243f965d04780c8"} Dec 09 17:29:57 crc kubenswrapper[4853]: E1209 17:29:57.568650 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:29:58 crc kubenswrapper[4853]: I1209 17:29:58.307576 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" event={"ID":"7796c327-5952-4b15-a864-511d8f1c75d6","Type":"ContainerStarted","Data":"884d494bc9308378ef6264b4bbd860dbf1e3c20d9ace47493ff2383830e07605"} Dec 09 17:29:58 crc kubenswrapper[4853]: I1209 17:29:58.332403 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" podStartSLOduration=1.624499024 podStartE2EDuration="2.332383261s" podCreationTimestamp="2025-12-09 17:29:56 +0000 UTC" firstStartedPulling="2025-12-09 17:29:57.272700389 +0000 UTC m=+2024.207439561" lastFinishedPulling="2025-12-09 17:29:57.980584616 +0000 UTC m=+2024.915323798" observedRunningTime="2025-12-09 17:29:58.321883491 +0000 UTC m=+2025.256622673" watchObservedRunningTime="2025-12-09 17:29:58.332383261 +0000 UTC m=+2025.267122443" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.138643 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682"] Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.144011 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.161176 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.161304 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.181217 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682"] Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.223831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-config-volume\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.224704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fgs8\" (UniqueName: \"kubernetes.io/projected/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-kube-api-access-6fgs8\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.224784 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-secret-volume\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.327116 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fgs8\" (UniqueName: \"kubernetes.io/projected/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-kube-api-access-6fgs8\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.327167 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-secret-volume\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.327215 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-config-volume\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.328443 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-config-volume\") pod 
\"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.333242 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-secret-volume\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.346787 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fgs8\" (UniqueName: \"kubernetes.io/projected/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-kube-api-access-6fgs8\") pod \"collect-profiles-29421690-xb682\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.370560 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8w92" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="registry-server" probeResult="failure" output=< Dec 09 17:30:00 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 17:30:00 crc kubenswrapper[4853]: > Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.495314 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:00 crc kubenswrapper[4853]: I1209 17:30:00.987000 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682"] Dec 09 17:30:00 crc kubenswrapper[4853]: W1209 17:30:00.987454 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc049bac8_63ef_4c10_a7b8_a2c79dbb2f4e.slice/crio-10cc6d8caca18d3eee3c10b29662bb04e27fe1f430ad53178dca4a0d6977ff3d WatchSource:0}: Error finding container 10cc6d8caca18d3eee3c10b29662bb04e27fe1f430ad53178dca4a0d6977ff3d: Status 404 returned error can't find the container with id 10cc6d8caca18d3eee3c10b29662bb04e27fe1f430ad53178dca4a0d6977ff3d Dec 09 17:30:01 crc kubenswrapper[4853]: I1209 17:30:01.343623 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" event={"ID":"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e","Type":"ContainerStarted","Data":"7d348ac22ade259af3a232af1ad391e3dba535536ccfe024bd03c9b5ee7b2195"} Dec 09 17:30:01 crc kubenswrapper[4853]: I1209 17:30:01.344072 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" event={"ID":"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e","Type":"ContainerStarted","Data":"10cc6d8caca18d3eee3c10b29662bb04e27fe1f430ad53178dca4a0d6977ff3d"} Dec 09 17:30:01 crc kubenswrapper[4853]: I1209 17:30:01.379040 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" podStartSLOduration=1.37901864 podStartE2EDuration="1.37901864s" podCreationTimestamp="2025-12-09 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 17:30:01.365177748 +0000 UTC 
m=+2028.299916930" watchObservedRunningTime="2025-12-09 17:30:01.37901864 +0000 UTC m=+2028.313757832" Dec 09 17:30:01 crc kubenswrapper[4853]: E1209 17:30:01.570318 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:30:02 crc kubenswrapper[4853]: I1209 17:30:02.355380 4853 generic.go:334] "Generic (PLEG): container finished" podID="c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" containerID="7d348ac22ade259af3a232af1ad391e3dba535536ccfe024bd03c9b5ee7b2195" exitCode=0 Dec 09 17:30:02 crc kubenswrapper[4853]: I1209 17:30:02.355495 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" event={"ID":"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e","Type":"ContainerDied","Data":"7d348ac22ade259af3a232af1ad391e3dba535536ccfe024bd03c9b5ee7b2195"} Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.751443 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.824502 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-secret-volume\") pod \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.824733 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-config-volume\") pod \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.825232 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" (UID: "c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.825403 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fgs8\" (UniqueName: \"kubernetes.io/projected/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-kube-api-access-6fgs8\") pod \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\" (UID: \"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e\") " Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.827157 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.833836 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-kube-api-access-6fgs8" (OuterVolumeSpecName: "kube-api-access-6fgs8") pod "c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" (UID: "c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e"). InnerVolumeSpecName "kube-api-access-6fgs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.834869 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" (UID: "c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.929247 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:03 crc kubenswrapper[4853]: I1209 17:30:03.929295 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fgs8\" (UniqueName: \"kubernetes.io/projected/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e-kube-api-access-6fgs8\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:04 crc kubenswrapper[4853]: I1209 17:30:04.379216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" event={"ID":"c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e","Type":"ContainerDied","Data":"10cc6d8caca18d3eee3c10b29662bb04e27fe1f430ad53178dca4a0d6977ff3d"} Dec 09 17:30:04 crc kubenswrapper[4853]: I1209 17:30:04.379524 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10cc6d8caca18d3eee3c10b29662bb04e27fe1f430ad53178dca4a0d6977ff3d" Dec 09 17:30:04 crc kubenswrapper[4853]: I1209 17:30:04.379290 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682" Dec 09 17:30:04 crc kubenswrapper[4853]: I1209 17:30:04.458081 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz"] Dec 09 17:30:04 crc kubenswrapper[4853]: I1209 17:30:04.472047 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421645-4bbwz"] Dec 09 17:30:05 crc kubenswrapper[4853]: I1209 17:30:05.589116 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4118aad-5782-4909-a5df-28f0f772ef10" path="/var/lib/kubelet/pods/a4118aad-5782-4909-a5df-28f0f772ef10/volumes" Dec 09 17:30:08 crc kubenswrapper[4853]: I1209 17:30:08.049519 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rhs9j"] Dec 09 17:30:08 crc kubenswrapper[4853]: I1209 17:30:08.066612 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rhs9j"] Dec 09 17:30:09 crc kubenswrapper[4853]: I1209 17:30:09.339150 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:30:09 crc kubenswrapper[4853]: I1209 17:30:09.398472 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:30:09 crc kubenswrapper[4853]: I1209 17:30:09.580335 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6e5f07-60db-4bae-9f04-8c5915067796" path="/var/lib/kubelet/pods/4c6e5f07-60db-4bae-9f04-8c5915067796/volumes" Dec 09 17:30:10 crc kubenswrapper[4853]: I1209 17:30:10.151528 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-t8w92"] Dec 09 17:30:10 crc kubenswrapper[4853]: I1209 17:30:10.447704 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8w92" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="registry-server" containerID="cri-o://bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee" gracePeriod=2 Dec 09 17:30:10 crc kubenswrapper[4853]: I1209 17:30:10.950324 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.021985 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-catalog-content\") pod \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.022112 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fdk8\" (UniqueName: \"kubernetes.io/projected/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-kube-api-access-2fdk8\") pod \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.022189 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-utilities\") pod \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\" (UID: \"fdc047b9-c02f-472f-9f46-bfb19a12cfb7\") " Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.022922 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-utilities" (OuterVolumeSpecName: "utilities") pod "fdc047b9-c02f-472f-9f46-bfb19a12cfb7" (UID: "fdc047b9-c02f-472f-9f46-bfb19a12cfb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.028743 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-kube-api-access-2fdk8" (OuterVolumeSpecName: "kube-api-access-2fdk8") pod "fdc047b9-c02f-472f-9f46-bfb19a12cfb7" (UID: "fdc047b9-c02f-472f-9f46-bfb19a12cfb7"). InnerVolumeSpecName "kube-api-access-2fdk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.124413 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fdk8\" (UniqueName: \"kubernetes.io/projected/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-kube-api-access-2fdk8\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.124678 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.148844 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdc047b9-c02f-472f-9f46-bfb19a12cfb7" (UID: "fdc047b9-c02f-472f-9f46-bfb19a12cfb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.226434 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdc047b9-c02f-472f-9f46-bfb19a12cfb7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.468007 4853 generic.go:334] "Generic (PLEG): container finished" podID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerID="bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee" exitCode=0 Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.468059 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerDied","Data":"bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee"} Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.468109 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8w92" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.468133 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8w92" event={"ID":"fdc047b9-c02f-472f-9f46-bfb19a12cfb7","Type":"ContainerDied","Data":"587af48c56119c5665d343886f2769b05ef05a531d333ba7bc07687818125f1f"} Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.468164 4853 scope.go:117] "RemoveContainer" containerID="bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.518103 4853 scope.go:117] "RemoveContainer" containerID="91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.520049 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8w92"] Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.536169 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t8w92"] Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.565367 4853 scope.go:117] "RemoveContainer" containerID="a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427" Dec 09 17:30:11 crc kubenswrapper[4853]: E1209 17:30:11.572040 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.605667 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" path="/var/lib/kubelet/pods/fdc047b9-c02f-472f-9f46-bfb19a12cfb7/volumes" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.638793 4853 scope.go:117] "RemoveContainer" containerID="bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee" Dec 09 17:30:11 crc kubenswrapper[4853]: E1209 17:30:11.639350 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee\": container with ID starting with bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee not found: ID does not exist" 
containerID="bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.639390 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee"} err="failed to get container status \"bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee\": rpc error: code = NotFound desc = could not find container \"bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee\": container with ID starting with bd2db3f759a2667c264b7089980785cf2d961ef184082b83e0b2ba477159b8ee not found: ID does not exist" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.639412 4853 scope.go:117] "RemoveContainer" containerID="91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087" Dec 09 17:30:11 crc kubenswrapper[4853]: E1209 17:30:11.640132 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087\": container with ID starting with 91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087 not found: ID does not exist" containerID="91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.640167 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087"} err="failed to get container status \"91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087\": rpc error: code = NotFound desc = could not find container \"91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087\": container with ID starting with 91d102abb9af8ca3eaf13dbb5bf07f349be6a2075d23b3a48f60491a403f7087 not found: ID does not exist" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.640196 4853 scope.go:117] "RemoveContainer" containerID="a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427" Dec 09 17:30:11 crc kubenswrapper[4853]: E1209 17:30:11.640712 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427\": container with ID starting with a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427 not found: ID does not exist" containerID="a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427" Dec 09 17:30:11 crc kubenswrapper[4853]: I1209 17:30:11.640772 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427"} err="failed to get container status \"a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427\": rpc error: code = NotFound desc = could not find container \"a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427\": container with ID starting with a408e31f908605aee76b5134670d4c8be5ac8e74eb1904bd195cfb4b6fdef427 not found: ID does not exist" Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.053077 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-6be2-account-create-update-4g2xf"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.062911 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e400-account-create-update-2b57n"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 
17:30:12.079396 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4186-account-create-update-cdtpf"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.091008 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9dphq"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.101177 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wpnd5"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.111191 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-cxxpb"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.121495 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-573f-account-create-update-gq8pp"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.132097 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e400-account-create-update-2b57n"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.141920 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-6be2-account-create-update-4g2xf"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.151721 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9dphq"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.161845 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4186-account-create-update-cdtpf"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.171639 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-cxxpb"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.182008 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wpnd5"] Dec 09 17:30:12 crc kubenswrapper[4853]: I1209 17:30:12.191885 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-573f-account-create-update-gq8pp"] Dec 09 17:30:13 crc kubenswrapper[4853]: I1209 17:30:13.588880 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea83a61-c2d2-44f7-86a2-fe7279fc4b85" path="/var/lib/kubelet/pods/2ea83a61-c2d2-44f7-86a2-fe7279fc4b85/volumes" Dec 09 17:30:13 crc kubenswrapper[4853]: I1209 17:30:13.589514 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309dcb32-e680-454f-a815-05e689a3f35e" path="/var/lib/kubelet/pods/309dcb32-e680-454f-a815-05e689a3f35e/volumes" Dec 09 17:30:13 crc kubenswrapper[4853]: I1209 17:30:13.590531 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4818586d-6c0a-4b51-acf3-51605cd25d5f" path="/var/lib/kubelet/pods/4818586d-6c0a-4b51-acf3-51605cd25d5f/volumes" Dec 09 17:30:13 crc kubenswrapper[4853]: E1209 17:30:13.590825 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:30:13 crc kubenswrapper[4853]: I1209 17:30:13.591531 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bdeb0f7-5749-4ef1-baca-b0e6f992c48f" path="/var/lib/kubelet/pods/6bdeb0f7-5749-4ef1-baca-b0e6f992c48f/volumes" Dec 09 17:30:13 crc kubenswrapper[4853]: I1209 17:30:13.592773 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99cf6f02-4548-4822-8cea-219f8f35db7d" 
path="/var/lib/kubelet/pods/99cf6f02-4548-4822-8cea-219f8f35db7d/volumes" Dec 09 17:30:13 crc kubenswrapper[4853]: I1209 17:30:13.593489 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2e1f43-e047-4825-9457-a3a9bcfba205" path="/var/lib/kubelet/pods/9d2e1f43-e047-4825-9457-a3a9bcfba205/volumes" Dec 09 17:30:13 crc kubenswrapper[4853]: I1209 17:30:13.594186 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d" path="/var/lib/kubelet/pods/d853bf1e-8a4b-4ac7-8165-c0fcc3d7357d/volumes" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.240162 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sgk7h"] Dec 09 17:30:14 crc kubenswrapper[4853]: E1209 17:30:14.241043 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" containerName="collect-profiles" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.241067 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" containerName="collect-profiles" Dec 09 17:30:14 crc kubenswrapper[4853]: E1209 17:30:14.241087 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="extract-content" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.241096 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="extract-content" Dec 09 17:30:14 crc kubenswrapper[4853]: E1209 17:30:14.241121 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="extract-utilities" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.241129 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="extract-utilities" Dec 09 17:30:14 crc kubenswrapper[4853]: E1209 17:30:14.241182 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="registry-server" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.241193 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="registry-server" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.241514 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" containerName="collect-profiles" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.241543 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc047b9-c02f-472f-9f46-bfb19a12cfb7" containerName="registry-server" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.244050 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.251732 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgk7h"] Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.321056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-catalog-content\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.321210 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-utilities\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.321313 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27ws\" (UniqueName: \"kubernetes.io/projected/98958ce8-eace-467c-b9d9-8e2bbb5041d9-kube-api-access-t27ws\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.423574 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-catalog-content\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.423760 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-utilities\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.424091 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-catalog-content\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.424120 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-utilities\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.424255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27ws\" (UniqueName: \"kubernetes.io/projected/98958ce8-eace-467c-b9d9-8e2bbb5041d9-kube-api-access-t27ws\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.453629 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t27ws\" (UniqueName: \"kubernetes.io/projected/98958ce8-eace-467c-b9d9-8e2bbb5041d9-kube-api-access-t27ws\") pod \"community-operators-sgk7h\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:14 crc kubenswrapper[4853]: I1209 17:30:14.565259 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.020463 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgk7h"] Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.520214 4853 generic.go:334] "Generic (PLEG): container finished" podID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerID="7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5" exitCode=0 Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.520323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgk7h" event={"ID":"98958ce8-eace-467c-b9d9-8e2bbb5041d9","Type":"ContainerDied","Data":"7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5"} Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.520524 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgk7h" event={"ID":"98958ce8-eace-467c-b9d9-8e2bbb5041d9","Type":"ContainerStarted","Data":"fb939ee4f06ae4e479af4c30b54453d66c0fe660309c45b2778ae33dfa0284be"} Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.522900 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.582704 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ssh5b"] Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.585616 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.602644 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssh5b"] Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.651824 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld5hf\" (UniqueName: \"kubernetes.io/projected/23239832-a253-4323-8567-714a358448b5-kube-api-access-ld5hf\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.651936 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-catalog-content\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.652429 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-utilities\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.754555 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld5hf\" (UniqueName: \"kubernetes.io/projected/23239832-a253-4323-8567-714a358448b5-kube-api-access-ld5hf\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.754724 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-catalog-content\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.754965 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-utilities\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.755137 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-catalog-content\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.755445 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-utilities\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.775548 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ld5hf\" (UniqueName: \"kubernetes.io/projected/23239832-a253-4323-8567-714a358448b5-kube-api-access-ld5hf\") pod \"certified-operators-ssh5b\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:15 crc kubenswrapper[4853]: I1209 17:30:15.920845 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.381228 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssh5b"] Dec 09 17:30:16 crc kubenswrapper[4853]: W1209 17:30:16.388827 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23239832_a253_4323_8567_714a358448b5.slice/crio-98e9a9079dcdca25105f4d1f0b780589e4fc453770b8accdfeba38022343df10 WatchSource:0}: Error finding container 98e9a9079dcdca25105f4d1f0b780589e4fc453770b8accdfeba38022343df10: Status 404 returned error can't find the container with id 98e9a9079dcdca25105f4d1f0b780589e4fc453770b8accdfeba38022343df10 Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.534794 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssh5b" event={"ID":"23239832-a253-4323-8567-714a358448b5","Type":"ContainerStarted","Data":"98e9a9079dcdca25105f4d1f0b780589e4fc453770b8accdfeba38022343df10"} Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.537642 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgk7h" event={"ID":"98958ce8-eace-467c-b9d9-8e2bbb5041d9","Type":"ContainerStarted","Data":"fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722"} Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.567782 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nf2vv"] Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.579312 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.619664 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf2vv"] Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.681036 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njtx\" (UniqueName: \"kubernetes.io/projected/424cf22a-a62a-49aa-853a-73b5ae86acb7-kube-api-access-4njtx\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.681141 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-utilities\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.681635 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-catalog-content\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.783791 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njtx\" (UniqueName: \"kubernetes.io/projected/424cf22a-a62a-49aa-853a-73b5ae86acb7-kube-api-access-4njtx\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.783862 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-utilities\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.784003 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-catalog-content\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.784525 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-utilities\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.784775 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-catalog-content\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.803996 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4njtx\" (UniqueName: \"kubernetes.io/projected/424cf22a-a62a-49aa-853a-73b5ae86acb7-kube-api-access-4njtx\") pod \"redhat-marketplace-nf2vv\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:16 crc kubenswrapper[4853]: I1209 17:30:16.981411 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:17 crc kubenswrapper[4853]: I1209 17:30:17.365759 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf2vv"] Dec 09 17:30:17 crc kubenswrapper[4853]: I1209 17:30:17.553758 4853 generic.go:334] "Generic (PLEG): container finished" podID="23239832-a253-4323-8567-714a358448b5" containerID="5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7" exitCode=0 Dec 09 17:30:17 crc kubenswrapper[4853]: I1209 17:30:17.554077 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssh5b" event={"ID":"23239832-a253-4323-8567-714a358448b5","Type":"ContainerDied","Data":"5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7"} Dec 09 17:30:17 crc kubenswrapper[4853]: I1209 17:30:17.557635 4853 generic.go:334] "Generic (PLEG): container finished" podID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerID="fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722" exitCode=0 Dec 09 17:30:17 crc kubenswrapper[4853]: I1209 17:30:17.557699 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgk7h" event={"ID":"98958ce8-eace-467c-b9d9-8e2bbb5041d9","Type":"ContainerDied","Data":"fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722"} Dec 09 17:30:17 crc kubenswrapper[4853]: I1209 17:30:17.559673 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf2vv" event={"ID":"424cf22a-a62a-49aa-853a-73b5ae86acb7","Type":"ContainerStarted","Data":"1c0921f70060db5c842e114cb458ded4a07c0158f7c091dcb1c71c9d5239651e"} Dec 09 17:30:18 crc kubenswrapper[4853]: I1209 17:30:18.034556 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-vqpmd"] Dec 09 17:30:18 crc kubenswrapper[4853]: I1209 17:30:18.044935 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-vqpmd"] Dec 09 17:30:18 crc kubenswrapper[4853]: I1209 17:30:18.573111 4853 generic.go:334] "Generic (PLEG): container finished" podID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerID="3b3e890bf884c91832d185f6816acc35966d15f9f431285e5e77dac2d0c1218e" exitCode=0 Dec 09 17:30:18 crc kubenswrapper[4853]: I1209 17:30:18.573323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf2vv" event={"ID":"424cf22a-a62a-49aa-853a-73b5ae86acb7","Type":"ContainerDied","Data":"3b3e890bf884c91832d185f6816acc35966d15f9f431285e5e77dac2d0c1218e"} Dec 09 17:30:18 crc kubenswrapper[4853]: I1209 17:30:18.575741 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssh5b" event={"ID":"23239832-a253-4323-8567-714a358448b5","Type":"ContainerStarted","Data":"4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557"} Dec 09 17:30:19 crc kubenswrapper[4853]: I1209 17:30:19.603763 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfca4d2f-3a00-4f1f-8654-b7ef5333d22f" 
path="/var/lib/kubelet/pods/bfca4d2f-3a00-4f1f-8654-b7ef5333d22f/volumes" Dec 09 17:30:19 crc kubenswrapper[4853]: I1209 17:30:19.605755 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgk7h" event={"ID":"98958ce8-eace-467c-b9d9-8e2bbb5041d9","Type":"ContainerStarted","Data":"1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd"} Dec 09 17:30:19 crc kubenswrapper[4853]: I1209 17:30:19.616043 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sgk7h" podStartSLOduration=2.764998574 podStartE2EDuration="5.616011678s" podCreationTimestamp="2025-12-09 17:30:14 +0000 UTC" firstStartedPulling="2025-12-09 17:30:15.52248162 +0000 UTC m=+2042.457220822" lastFinishedPulling="2025-12-09 17:30:18.373494744 +0000 UTC m=+2045.308233926" observedRunningTime="2025-12-09 17:30:19.610113625 +0000 UTC m=+2046.544852807" watchObservedRunningTime="2025-12-09 17:30:19.616011678 +0000 UTC m=+2046.550750860" Dec 09 17:30:20 crc kubenswrapper[4853]: I1209 17:30:20.038623 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tvtzt"] Dec 09 17:30:20 crc kubenswrapper[4853]: I1209 17:30:20.051095 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tvtzt"] Dec 09 17:30:20 crc kubenswrapper[4853]: I1209 17:30:20.608219 4853 generic.go:334] "Generic (PLEG): container finished" podID="23239832-a253-4323-8567-714a358448b5" containerID="4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557" exitCode=0 Dec 09 17:30:20 crc kubenswrapper[4853]: I1209 17:30:20.608693 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssh5b" event={"ID":"23239832-a253-4323-8567-714a358448b5","Type":"ContainerDied","Data":"4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557"} Dec 09 17:30:20 crc kubenswrapper[4853]: I1209 17:30:20.615008 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf2vv" event={"ID":"424cf22a-a62a-49aa-853a-73b5ae86acb7","Type":"ContainerStarted","Data":"65fdfe43a925abc711646dac4ade7b03d35290dcd36db240f9ce0d7f79e7419a"} Dec 09 17:30:21 crc kubenswrapper[4853]: I1209 17:30:21.580265 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="704f2e28-f375-4a95-a680-87e1bcb93058" path="/var/lib/kubelet/pods/704f2e28-f375-4a95-a680-87e1bcb93058/volumes" Dec 09 17:30:21 crc kubenswrapper[4853]: I1209 17:30:21.632768 4853 generic.go:334] "Generic (PLEG): container finished" podID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerID="65fdfe43a925abc711646dac4ade7b03d35290dcd36db240f9ce0d7f79e7419a" exitCode=0 Dec 09 17:30:21 crc kubenswrapper[4853]: I1209 17:30:21.632861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf2vv" event={"ID":"424cf22a-a62a-49aa-853a-73b5ae86acb7","Type":"ContainerDied","Data":"65fdfe43a925abc711646dac4ade7b03d35290dcd36db240f9ce0d7f79e7419a"} Dec 09 17:30:22 crc kubenswrapper[4853]: I1209 17:30:22.663009 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssh5b" event={"ID":"23239832-a253-4323-8567-714a358448b5","Type":"ContainerStarted","Data":"a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e"} Dec 09 17:30:22 crc kubenswrapper[4853]: I1209 17:30:22.688757 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-ssh5b" podStartSLOduration=3.866885644 podStartE2EDuration="7.688738848s" podCreationTimestamp="2025-12-09 17:30:15 +0000 UTC" firstStartedPulling="2025-12-09 17:30:17.556256401 +0000 UTC m=+2044.490995583" lastFinishedPulling="2025-12-09 17:30:21.378109605 +0000 UTC m=+2048.312848787" observedRunningTime="2025-12-09 17:30:22.686078714 +0000 UTC m=+2049.620817896" watchObservedRunningTime="2025-12-09 17:30:22.688738848 +0000 UTC m=+2049.623478030" Dec 09 17:30:23 crc kubenswrapper[4853]: I1209 17:30:23.704119 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nf2vv" podStartSLOduration=2.811519547 podStartE2EDuration="7.704096355s" podCreationTimestamp="2025-12-09 17:30:16 +0000 UTC" firstStartedPulling="2025-12-09 17:30:18.576387981 +0000 UTC m=+2045.511127163" lastFinishedPulling="2025-12-09 17:30:23.468964789 +0000 UTC m=+2050.403703971" observedRunningTime="2025-12-09 17:30:23.703844429 +0000 UTC m=+2050.638583601" watchObservedRunningTime="2025-12-09 17:30:23.704096355 +0000 UTC m=+2050.638835527" Dec 09 17:30:24 crc kubenswrapper[4853]: I1209 17:30:24.566135 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:24 crc kubenswrapper[4853]: I1209 17:30:24.566466 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:24 crc kubenswrapper[4853]: I1209 17:30:24.620095 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:24 crc kubenswrapper[4853]: I1209 17:30:24.694108 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf2vv" event={"ID":"424cf22a-a62a-49aa-853a-73b5ae86acb7","Type":"ContainerStarted","Data":"241e42d112c216dd71f83095557ae3af0d645b64ab6d1e7699f992f2345168e2"} Dec 09 17:30:24 crc kubenswrapper[4853]: I1209 17:30:24.751571 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:25 crc kubenswrapper[4853]: I1209 17:30:25.921362 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:25 crc kubenswrapper[4853]: I1209 17:30:25.921709 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:25 crc kubenswrapper[4853]: I1209 17:30:25.983743 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.303456 4853 scope.go:117] "RemoveContainer" containerID="61b393f420cfcd4fae0768f517883770b7a413242ee2a72404f9e7eaec7a8b07" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.344477 4853 scope.go:117] "RemoveContainer" containerID="f29959f992e411c94b3189e6fcc5f1a5a272ceab6fd7e73c6e582057f41555a4" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.404384 4853 scope.go:117] "RemoveContainer" containerID="5177ef060f920e68cdca536523b5eb38e8d1770cf3a3736255b82cf78e9d8c26" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.471931 4853 scope.go:117] "RemoveContainer" containerID="cc6080e75e72f735b8ce1a571713c40291fa23d00c62971a2f006c4b3cfb3ca0" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 
17:30:26.529514 4853 scope.go:117] "RemoveContainer" containerID="7591ff75fe2fe274b6d74ae2f84daded079492052474677a40990897961cee13" Dec 09 17:30:26 crc kubenswrapper[4853]: E1209 17:30:26.569290 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.588094 4853 scope.go:117] "RemoveContainer" containerID="ad96ab60fbf8d1862c5e2f009640a778a2aa97ad4ca2a8e5378c341257b40bcb" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.613264 4853 scope.go:117] "RemoveContainer" containerID="99eef2dd2dcfdd11660903a3442c1d3f0e251da047a01d7a5162830f83702329" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.636543 4853 scope.go:117] "RemoveContainer" containerID="2690f02c51eb5bd10362927c848067cfd3c4487fa7afa88c35c89a0ddfb1d1ac" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.692056 4853 scope.go:117] "RemoveContainer" containerID="d3bf6d47b9579e8c73f721fbc39cfa67218de2ab2aff485cd76c37637cdab1ad" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.713451 4853 scope.go:117] "RemoveContainer" containerID="d5c610012a8b7131ab4438259a745df256bb522a879b8f3f43f5a3126e163f67" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.747820 4853 scope.go:117] "RemoveContainer" containerID="49b24ea6857cf3c0600bfedea3e664b0522da9b5fb8aa28138211d1d2841d11f" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.781331 4853 scope.go:117] "RemoveContainer" containerID="5557a8ed4d24736afde9e48642f7b6abd22d43f208550189969f19bc03cab7bf" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.792399 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.801651 4853 scope.go:117] "RemoveContainer" containerID="9b28d6a2316fdfede38c842c97fcecd17b12a72d983339b2e01c0b08d05193c8" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.837330 4853 scope.go:117] "RemoveContainer" containerID="f9e6ec9df1238d57904d954b0e54fb64b85391ad02465d2e98018bef8a0f9982" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.859355 4853 scope.go:117] "RemoveContainer" containerID="08d067cd23c628d314ccfeed5149e0d5bbfcb9c085541a0e898ec8eefc7e0b06" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.892914 4853 scope.go:117] "RemoveContainer" containerID="f75b2e07c359ce6a7266a042ce0c0b76b9cfd87bb72d4f253b9f41c501cf6390" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.912058 4853 scope.go:117] "RemoveContainer" containerID="be3cebdbbf20436d8dc55e114f62cee403ced6ef5f6c706fc3a95ee0d299360e" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.935671 4853 scope.go:117] "RemoveContainer" containerID="260d49b2f46ce80b2cd4512a948f5a8b328e10ba10a48ee87d77f096f27965f5" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.967879 4853 scope.go:117] "RemoveContainer" containerID="fb54a58d8e849681e3a0d3b9ee770ea0780a9ef0ed267397635af7ffa856a5bd" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.981478 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:26 crc kubenswrapper[4853]: I1209 17:30:26.982733 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.048083 4853 scope.go:117] "RemoveContainer" containerID="e4b4b4ebefa767939655f409111f5d21594b364b729ac15bce205a399a0f4fbe" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.107193 4853 scope.go:117] "RemoveContainer" containerID="7b33d205692c9cfcf00bbf6bac9b579bec7c25b59d488629152e7697c1697ad8" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.119029 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.169964 4853 scope.go:117] "RemoveContainer" containerID="3292ad8070a7a77535cd8f0075363bae040c18c86bc2edcd1c123e6abfb03b28" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.206395 4853 scope.go:117] "RemoveContainer" containerID="c8eca91b3d10d751ba9044539a565ccf93bf2b6850494f542dc05613a111b270" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.230817 4853 scope.go:117] "RemoveContainer" containerID="87c96d96b21183333ddbefe7f28755baf92d84f54db5b67f452b8b72439a4799" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.265755 4853 scope.go:117] "RemoveContainer" containerID="48d4c4c26bfb4b066ab7d5ad76560feb6d239a5646547c91dc33c6b2d1172b1a" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.289753 4853 scope.go:117] "RemoveContainer" containerID="5c4d4c5782818a71fc3061056fbff6a0d54f8c8975600bb8c22a43288dbab5a6" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.314188 4853 scope.go:117] "RemoveContainer" containerID="1552801835f6066e074352376905312bad62c3e478b8613784d0612de9f8e602" Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.959073 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgk7h"] Dec 09 17:30:27 crc kubenswrapper[4853]: I1209 17:30:27.959512 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sgk7h" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="registry-server" containerID="cri-o://1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd" gracePeriod=2 Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.560258 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:28 crc kubenswrapper[4853]: E1209 17:30:28.573461 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.688253 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-utilities\") pod \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.688474 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t27ws\" (UniqueName: \"kubernetes.io/projected/98958ce8-eace-467c-b9d9-8e2bbb5041d9-kube-api-access-t27ws\") pod \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.688714 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-catalog-content\") pod \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\" (UID: \"98958ce8-eace-467c-b9d9-8e2bbb5041d9\") " Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.692484 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-utilities" (OuterVolumeSpecName: "utilities") pod "98958ce8-eace-467c-b9d9-8e2bbb5041d9" (UID: "98958ce8-eace-467c-b9d9-8e2bbb5041d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.698834 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98958ce8-eace-467c-b9d9-8e2bbb5041d9-kube-api-access-t27ws" (OuterVolumeSpecName: "kube-api-access-t27ws") pod "98958ce8-eace-467c-b9d9-8e2bbb5041d9" (UID: "98958ce8-eace-467c-b9d9-8e2bbb5041d9"). InnerVolumeSpecName "kube-api-access-t27ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.748334 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98958ce8-eace-467c-b9d9-8e2bbb5041d9" (UID: "98958ce8-eace-467c-b9d9-8e2bbb5041d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.778512 4853 generic.go:334] "Generic (PLEG): container finished" podID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerID="1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd" exitCode=0 Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.778627 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sgk7h" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.778653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgk7h" event={"ID":"98958ce8-eace-467c-b9d9-8e2bbb5041d9","Type":"ContainerDied","Data":"1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd"} Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.778734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgk7h" event={"ID":"98958ce8-eace-467c-b9d9-8e2bbb5041d9","Type":"ContainerDied","Data":"fb939ee4f06ae4e479af4c30b54453d66c0fe660309c45b2778ae33dfa0284be"} Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.778770 4853 scope.go:117] "RemoveContainer" containerID="1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.792478 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.792751 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t27ws\" (UniqueName: \"kubernetes.io/projected/98958ce8-eace-467c-b9d9-8e2bbb5041d9-kube-api-access-t27ws\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.792767 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98958ce8-eace-467c-b9d9-8e2bbb5041d9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.814848 4853 scope.go:117] "RemoveContainer" containerID="fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.818271 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgk7h"] Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.828368 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sgk7h"] Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.835285 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.840838 4853 scope.go:117] "RemoveContainer" containerID="7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.898370 4853 scope.go:117] "RemoveContainer" containerID="1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd" Dec 09 17:30:28 crc kubenswrapper[4853]: E1209 17:30:28.898808 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd\": container with ID starting with 1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd not found: ID does not exist" containerID="1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.898841 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd"} err="failed to get container status 
\"1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd\": rpc error: code = NotFound desc = could not find container \"1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd\": container with ID starting with 1ebe59597ec06a5bc407dbd642555eb635ae5531a8207345621c78d5b4ebd0cd not found: ID does not exist" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.898861 4853 scope.go:117] "RemoveContainer" containerID="fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722" Dec 09 17:30:28 crc kubenswrapper[4853]: E1209 17:30:28.899232 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722\": container with ID starting with fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722 not found: ID does not exist" containerID="fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.899257 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722"} err="failed to get container status \"fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722\": rpc error: code = NotFound desc = could not find container \"fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722\": container with ID starting with fc1c72d97c4cd30f5b9da58b5192bd22a6b51d7ad815b79c5297ec4621a50722 not found: ID does not exist" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.899270 4853 scope.go:117] "RemoveContainer" containerID="7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5" Dec 09 17:30:28 crc kubenswrapper[4853]: E1209 17:30:28.899489 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5\": container with ID starting with 7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5 not found: ID does not exist" containerID="7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.899511 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5"} err="failed to get container status \"7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5\": rpc error: code = NotFound desc = could not find container \"7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5\": container with ID starting with 7ea48e38921080f9c90c595f5ba695629c38242dc262410fadab7361c522aee5 not found: ID does not exist" Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.951808 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssh5b"] Dec 09 17:30:28 crc kubenswrapper[4853]: I1209 17:30:28.952081 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ssh5b" podUID="23239832-a253-4323-8567-714a358448b5" containerName="registry-server" containerID="cri-o://a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e" gracePeriod=2 Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.557735 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.584972 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" path="/var/lib/kubelet/pods/98958ce8-eace-467c-b9d9-8e2bbb5041d9/volumes" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.713137 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-utilities\") pod \"23239832-a253-4323-8567-714a358448b5\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.713186 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld5hf\" (UniqueName: \"kubernetes.io/projected/23239832-a253-4323-8567-714a358448b5-kube-api-access-ld5hf\") pod \"23239832-a253-4323-8567-714a358448b5\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.713325 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-catalog-content\") pod \"23239832-a253-4323-8567-714a358448b5\" (UID: \"23239832-a253-4323-8567-714a358448b5\") " Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.713790 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-utilities" (OuterVolumeSpecName: "utilities") pod "23239832-a253-4323-8567-714a358448b5" (UID: "23239832-a253-4323-8567-714a358448b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.714401 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.723951 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23239832-a253-4323-8567-714a358448b5-kube-api-access-ld5hf" (OuterVolumeSpecName: "kube-api-access-ld5hf") pod "23239832-a253-4323-8567-714a358448b5" (UID: "23239832-a253-4323-8567-714a358448b5"). InnerVolumeSpecName "kube-api-access-ld5hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.760991 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23239832-a253-4323-8567-714a358448b5" (UID: "23239832-a253-4323-8567-714a358448b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.793925 4853 generic.go:334] "Generic (PLEG): container finished" podID="23239832-a253-4323-8567-714a358448b5" containerID="a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e" exitCode=0 Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.793988 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssh5b" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.794003 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssh5b" event={"ID":"23239832-a253-4323-8567-714a358448b5","Type":"ContainerDied","Data":"a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e"} Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.794337 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssh5b" event={"ID":"23239832-a253-4323-8567-714a358448b5","Type":"ContainerDied","Data":"98e9a9079dcdca25105f4d1f0b780589e4fc453770b8accdfeba38022343df10"} Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.794360 4853 scope.go:117] "RemoveContainer" containerID="a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.816976 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld5hf\" (UniqueName: \"kubernetes.io/projected/23239832-a253-4323-8567-714a358448b5-kube-api-access-ld5hf\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.817003 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23239832-a253-4323-8567-714a358448b5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.825307 4853 scope.go:117] "RemoveContainer" containerID="4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.843323 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssh5b"] Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.864143 4853 scope.go:117] "RemoveContainer" containerID="5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.866587 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ssh5b"] Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.916979 4853 scope.go:117] "RemoveContainer" containerID="a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e" Dec 09 17:30:29 crc kubenswrapper[4853]: E1209 17:30:29.922070 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e\": container with ID starting with a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e not found: ID does not exist" containerID="a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.922109 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e"} err="failed to get container status \"a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e\": rpc error: code = NotFound desc = could not find container \"a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e\": container with ID starting with a1abf69c2c8d062ebc342d772885bef873ad6e526661be7ac3b3166e82caaf6e not found: ID does not exist" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.922137 4853 scope.go:117] "RemoveContainer" 
containerID="4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557" Dec 09 17:30:29 crc kubenswrapper[4853]: E1209 17:30:29.922438 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557\": container with ID starting with 4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557 not found: ID does not exist" containerID="4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.922472 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557"} err="failed to get container status \"4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557\": rpc error: code = NotFound desc = could not find container \"4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557\": container with ID starting with 4dba22220fe01bb5871071155a302e903b3ebde8068568b75b68a82cc98a2557 not found: ID does not exist" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.922491 4853 scope.go:117] "RemoveContainer" containerID="5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7" Dec 09 17:30:29 crc kubenswrapper[4853]: E1209 17:30:29.922814 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7\": container with ID starting with 5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7 not found: ID does not exist" containerID="5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7" Dec 09 17:30:29 crc kubenswrapper[4853]: I1209 17:30:29.922841 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7"} err="failed to get container status \"5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7\": rpc error: code = NotFound desc = could not find container \"5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7\": container with ID starting with 5a9e6e7cdaa707f4b62ba4f1be4e4052b8c87cf11d6614efcd181be95773c8b7 not found: ID does not exist" Dec 09 17:30:30 crc kubenswrapper[4853]: I1209 17:30:30.354440 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf2vv"] Dec 09 17:30:30 crc kubenswrapper[4853]: I1209 17:30:30.810839 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nf2vv" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="registry-server" containerID="cri-o://241e42d112c216dd71f83095557ae3af0d645b64ab6d1e7699f992f2345168e2" gracePeriod=2 Dec 09 17:30:31 crc kubenswrapper[4853]: I1209 17:30:31.585385 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23239832-a253-4323-8567-714a358448b5" path="/var/lib/kubelet/pods/23239832-a253-4323-8567-714a358448b5/volumes" Dec 09 17:30:31 crc kubenswrapper[4853]: I1209 17:30:31.841507 4853 generic.go:334] "Generic (PLEG): container finished" podID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerID="241e42d112c216dd71f83095557ae3af0d645b64ab6d1e7699f992f2345168e2" exitCode=0 Dec 09 17:30:31 crc kubenswrapper[4853]: I1209 17:30:31.841551 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nf2vv" event={"ID":"424cf22a-a62a-49aa-853a-73b5ae86acb7","Type":"ContainerDied","Data":"241e42d112c216dd71f83095557ae3af0d645b64ab6d1e7699f992f2345168e2"} Dec 09 17:30:31 crc kubenswrapper[4853]: I1209 17:30:31.841880 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf2vv" event={"ID":"424cf22a-a62a-49aa-853a-73b5ae86acb7","Type":"ContainerDied","Data":"1c0921f70060db5c842e114cb458ded4a07c0158f7c091dcb1c71c9d5239651e"} Dec 09 17:30:31 crc kubenswrapper[4853]: I1209 17:30:31.841900 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0921f70060db5c842e114cb458ded4a07c0158f7c091dcb1c71c9d5239651e" Dec 09 17:30:31 crc kubenswrapper[4853]: I1209 17:30:31.908430 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.084051 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-utilities\") pod \"424cf22a-a62a-49aa-853a-73b5ae86acb7\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.084127 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4njtx\" (UniqueName: \"kubernetes.io/projected/424cf22a-a62a-49aa-853a-73b5ae86acb7-kube-api-access-4njtx\") pod \"424cf22a-a62a-49aa-853a-73b5ae86acb7\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.084303 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-catalog-content\") pod \"424cf22a-a62a-49aa-853a-73b5ae86acb7\" (UID: \"424cf22a-a62a-49aa-853a-73b5ae86acb7\") " Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.084864 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-utilities" (OuterVolumeSpecName: "utilities") pod "424cf22a-a62a-49aa-853a-73b5ae86acb7" (UID: "424cf22a-a62a-49aa-853a-73b5ae86acb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.085337 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.091197 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424cf22a-a62a-49aa-853a-73b5ae86acb7-kube-api-access-4njtx" (OuterVolumeSpecName: "kube-api-access-4njtx") pod "424cf22a-a62a-49aa-853a-73b5ae86acb7" (UID: "424cf22a-a62a-49aa-853a-73b5ae86acb7"). InnerVolumeSpecName "kube-api-access-4njtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.103722 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "424cf22a-a62a-49aa-853a-73b5ae86acb7" (UID: "424cf22a-a62a-49aa-853a-73b5ae86acb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.188120 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4njtx\" (UniqueName: \"kubernetes.io/projected/424cf22a-a62a-49aa-853a-73b5ae86acb7-kube-api-access-4njtx\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.188155 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424cf22a-a62a-49aa-853a-73b5ae86acb7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.854539 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf2vv" Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.897457 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf2vv"] Dec 09 17:30:32 crc kubenswrapper[4853]: I1209 17:30:32.907159 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf2vv"] Dec 09 17:30:33 crc kubenswrapper[4853]: I1209 17:30:33.582440 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" path="/var/lib/kubelet/pods/424cf22a-a62a-49aa-853a-73b5ae86acb7/volumes" Dec 09 17:30:39 crc kubenswrapper[4853]: E1209 17:30:39.569880 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:30:41 crc kubenswrapper[4853]: E1209 17:30:41.571135 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:30:51 crc kubenswrapper[4853]: I1209 17:30:51.051977 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4pvgk"] Dec 09 17:30:51 crc kubenswrapper[4853]: I1209 17:30:51.068270 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4pvgk"] Dec 09 17:30:51 crc kubenswrapper[4853]: I1209 17:30:51.581019 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9742aa9-091a-499a-8fa7-49295b5e9488" path="/var/lib/kubelet/pods/b9742aa9-091a-499a-8fa7-49295b5e9488/volumes" Dec 09 17:30:54 crc kubenswrapper[4853]: E1209 17:30:54.721327 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:30:54 crc kubenswrapper[4853]: E1209 17:30:54.721878 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:30:54 crc kubenswrapper[4853]: E1209 17:30:54.722249 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 17:30:54 crc kubenswrapper[4853]: E1209 17:30:54.723423 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:30:56 crc kubenswrapper[4853]: E1209 17:30:56.692112 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:30:56 crc kubenswrapper[4853]: E1209 17:30:56.692446 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:30:56 crc kubenswrapper[4853]: E1209 17:30:56.692614 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:30:56 crc kubenswrapper[4853]: E1209 17:30:56.693757 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:31:01 crc kubenswrapper[4853]: I1209 17:31:01.036823 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r6pjt"] Dec 09 17:31:01 crc kubenswrapper[4853]: I1209 17:31:01.047997 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r6pjt"] Dec 09 17:31:01 crc kubenswrapper[4853]: I1209 17:31:01.592227 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a31d060-259e-4f8d-bb05-3d6cc9b198d1" path="/var/lib/kubelet/pods/3a31d060-259e-4f8d-bb05-3d6cc9b198d1/volumes" Dec 09 17:31:04 crc kubenswrapper[4853]: I1209 17:31:04.051075 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-696ml"] Dec 09 17:31:04 crc kubenswrapper[4853]: I1209 17:31:04.065971 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-696ml"] Dec 09 17:31:05 crc kubenswrapper[4853]: I1209 17:31:05.030750 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fs9xd"] Dec 09 17:31:05 crc kubenswrapper[4853]: I1209 17:31:05.042679 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fs9xd"] Dec 09 17:31:05 crc kubenswrapper[4853]: I1209 17:31:05.578633 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b7745a-2365-4bf7-951f-2faa6a046b18" path="/var/lib/kubelet/pods/03b7745a-2365-4bf7-951f-2faa6a046b18/volumes" Dec 09 17:31:05 crc kubenswrapper[4853]: I1209 17:31:05.579659 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b86b595d-63e4-41f1-979f-4a82cc01b136" path="/var/lib/kubelet/pods/b86b595d-63e4-41f1-979f-4a82cc01b136/volumes" Dec 
09 17:31:07 crc kubenswrapper[4853]: E1209 17:31:07.570262 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:31:11 crc kubenswrapper[4853]: E1209 17:31:11.570012 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:31:19 crc kubenswrapper[4853]: I1209 17:31:19.049748 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tpt7s"] Dec 09 17:31:19 crc kubenswrapper[4853]: I1209 17:31:19.061438 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tpt7s"] Dec 09 17:31:19 crc kubenswrapper[4853]: E1209 17:31:19.573959 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:31:19 crc kubenswrapper[4853]: I1209 17:31:19.584909 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c4cb93-d59f-4160-9e4d-506184f49afe" path="/var/lib/kubelet/pods/18c4cb93-d59f-4160-9e4d-506184f49afe/volumes" Dec 09 17:31:23 crc kubenswrapper[4853]: E1209 17:31:23.577852 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:31:27 crc kubenswrapper[4853]: I1209 17:31:27.947495 4853 scope.go:117] "RemoveContainer" containerID="de8a7d229a4639b95a5f0000d28aaa82f9207711b8e54cacb92a73d4815e752b" Dec 09 17:31:27 crc kubenswrapper[4853]: I1209 17:31:27.976154 4853 scope.go:117] "RemoveContainer" containerID="3a77dacf3400d5080df7efe79b77a51be064e428b0c5f8ae5548519158df0a4d" Dec 09 17:31:28 crc kubenswrapper[4853]: I1209 17:31:28.029304 4853 scope.go:117] "RemoveContainer" containerID="782ea6c9cdd1d1202ff5bd634bd800810fe1aa8d95ac496fb687f6d92285cbe8" Dec 09 17:31:28 crc kubenswrapper[4853]: I1209 17:31:28.085162 4853 scope.go:117] "RemoveContainer" containerID="61387370b3b8eb20b7bcedcd8bd1d63a30e0efc3992aea872599fe312754db22" Dec 09 17:31:28 crc kubenswrapper[4853]: I1209 17:31:28.144015 4853 scope.go:117] "RemoveContainer" containerID="4fd5170f06d48368f9a3a457b88be65264b2380f3cd23b303926251e091dcbef" Dec 09 17:31:28 crc kubenswrapper[4853]: I1209 17:31:28.201056 4853 scope.go:117] "RemoveContainer" containerID="12ee27b5cf3e933f7b7d978df64ec9968b0b64432080cebbacd81640db1b0dca" Dec 09 17:31:28 crc kubenswrapper[4853]: I1209 17:31:28.258790 4853 scope.go:117] "RemoveContainer" containerID="f05f45d4d7c286f77bc1705355643879b0b1895d5c34a856b4a721b2d94b857e" Dec 09 17:31:28 crc kubenswrapper[4853]: I1209 17:31:28.592574 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:31:28 crc kubenswrapper[4853]: I1209 17:31:28.592898 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:31:33 crc kubenswrapper[4853]: E1209 17:31:33.580544 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:31:34 crc kubenswrapper[4853]: E1209 17:31:34.570112 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:31:46 crc kubenswrapper[4853]: E1209 17:31:46.569010 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:31:49 crc kubenswrapper[4853]: E1209 17:31:49.571648 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:31:57 crc kubenswrapper[4853]: I1209 17:31:57.058009 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9g9jw"] Dec 09 17:31:57 crc kubenswrapper[4853]: I1209 17:31:57.073187 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-v4hn8"] Dec 09 17:31:57 crc kubenswrapper[4853]: I1209 17:31:57.087584 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9g9jw"] Dec 09 17:31:57 crc kubenswrapper[4853]: I1209 17:31:57.104206 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-v4hn8"] Dec 09 17:31:57 crc kubenswrapper[4853]: I1209 17:31:57.584935 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b35a27-0dc5-4c08-9505-f81db9987470" path="/var/lib/kubelet/pods/24b35a27-0dc5-4c08-9505-f81db9987470/volumes" Dec 09 17:31:57 crc kubenswrapper[4853]: I1209 17:31:57.586438 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f263ce7c-227e-4025-af32-6bf4176920f7" path="/var/lib/kubelet/pods/f263ce7c-227e-4025-af32-6bf4176920f7/volumes" Dec 09 17:31:58 crc kubenswrapper[4853]: I1209 17:31:58.592788 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:31:58 crc kubenswrapper[4853]: I1209 17:31:58.592850 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:31:59 crc kubenswrapper[4853]: E1209 17:31:59.570475 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:32:00 crc kubenswrapper[4853]: E1209 17:32:00.568701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:32:02 crc kubenswrapper[4853]: I1209 17:32:02.037042 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m5x9t"] Dec 09 17:32:02 crc kubenswrapper[4853]: I1209 17:32:02.052753 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m5x9t"] Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.049306 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9831-account-create-update-t5gr7"] Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.072027 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-eb70-account-create-update-tmkjg"] Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.084395 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2a5a-account-create-update-dxj42"] Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.099493 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9831-account-create-update-t5gr7"] Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.111789 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2a5a-account-create-update-dxj42"] Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.121578 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-eb70-account-create-update-tmkjg"] Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.586894 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a2dc7d-7b15-48fb-96eb-f11ff9023399" path="/var/lib/kubelet/pods/02a2dc7d-7b15-48fb-96eb-f11ff9023399/volumes" Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.589280 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a032d4f-5922-4085-8751-f413c1087e58" path="/var/lib/kubelet/pods/2a032d4f-5922-4085-8751-f413c1087e58/volumes" Dec 09 17:32:03 crc kubenswrapper[4853]: I1209 17:32:03.591151 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330df8bb-e48c-4b61-9cbd-69de9a1a1453" path="/var/lib/kubelet/pods/330df8bb-e48c-4b61-9cbd-69de9a1a1453/volumes" Dec 09 17:32:03 crc 
kubenswrapper[4853]: I1209 17:32:03.592338 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc14b612-092a-43a4-affb-95ff52a3e82d" path="/var/lib/kubelet/pods/dc14b612-092a-43a4-affb-95ff52a3e82d/volumes" Dec 09 17:32:10 crc kubenswrapper[4853]: E1209 17:32:10.570806 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:32:15 crc kubenswrapper[4853]: E1209 17:32:15.570858 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:32:24 crc kubenswrapper[4853]: E1209 17:32:24.569500 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.467138 4853 scope.go:117] "RemoveContainer" containerID="259f33b62214f99fae0ce702fb940677f82dc0b688ae70dddb7c1601a28b05e1" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.508880 4853 scope.go:117] "RemoveContainer" containerID="9cb9af83424083d069cbafddd9de10a27c4f49216ac13bba8f68717ac98f1864" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.582649 4853 scope.go:117] "RemoveContainer" containerID="433a4b32e84f22a359dc5036ab9d4b35097553928d581fb72c2f102611449960" Dec 09 17:32:28 crc kubenswrapper[4853]: E1209 17:32:28.582668 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.593210 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.593290 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.593336 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.594414 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"00554452bc356075032339a30493a8db8eb8765a75d762f1a48b0ef033e8dfa3"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.594483 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://00554452bc356075032339a30493a8db8eb8765a75d762f1a48b0ef033e8dfa3" gracePeriod=600 Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.630785 4853 scope.go:117] "RemoveContainer" containerID="227b4b6a730d1a894ab140ef4bdfde2d72a2ff5877df8e505dec50e56ddec151" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.690131 4853 scope.go:117] "RemoveContainer" containerID="b93c58cded713c656901ad54e28c82d3aa5cc6eb90d4df345fbff3615ea5811b" Dec 09 17:32:28 crc kubenswrapper[4853]: I1209 17:32:28.799118 4853 scope.go:117] "RemoveContainer" containerID="4f3b63efcf880b14320b4bc145b837f78a8af0e11267d0bc40347a20e9fd0111" Dec 09 17:32:29 crc kubenswrapper[4853]: I1209 17:32:29.286238 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="00554452bc356075032339a30493a8db8eb8765a75d762f1a48b0ef033e8dfa3" exitCode=0 Dec 09 17:32:29 crc kubenswrapper[4853]: I1209 17:32:29.286671 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"00554452bc356075032339a30493a8db8eb8765a75d762f1a48b0ef033e8dfa3"} Dec 09 17:32:29 crc kubenswrapper[4853]: I1209 17:32:29.286702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e"} Dec 09 17:32:29 crc kubenswrapper[4853]: I1209 17:32:29.286735 4853 scope.go:117] "RemoveContainer" containerID="3d353335948925706a799e5ea7a2550f2496a5e450c9d59eaf59cc728c6d2740" Dec 09 17:32:38 crc kubenswrapper[4853]: I1209 17:32:38.055271 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mxrk"] Dec 09 17:32:38 crc kubenswrapper[4853]: I1209 17:32:38.071393 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mxrk"] Dec 09 17:32:39 crc kubenswrapper[4853]: E1209 17:32:39.568959 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:32:39 crc kubenswrapper[4853]: I1209 17:32:39.579271 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb169fd1-98da-414e-9487-67a58e01f0a6" path="/var/lib/kubelet/pods/fb169fd1-98da-414e-9487-67a58e01f0a6/volumes" Dec 09 17:32:42 crc kubenswrapper[4853]: E1209 17:32:42.571163 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:32:51 crc kubenswrapper[4853]: I1209 17:32:51.058265 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-8e7b-account-create-update-qmpxf"] Dec 09 17:32:51 crc kubenswrapper[4853]: I1209 17:32:51.072258 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-9klzn"] Dec 09 17:32:51 crc kubenswrapper[4853]: I1209 17:32:51.084712 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-8e7b-account-create-update-qmpxf"] Dec 09 17:32:51 crc kubenswrapper[4853]: I1209 17:32:51.096948 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-9klzn"] Dec 09 17:32:51 crc kubenswrapper[4853]: E1209 17:32:51.572370 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:32:51 crc kubenswrapper[4853]: I1209 17:32:51.589165 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016225c8-a52d-43fd-b0f9-b1df2c77a52b" path="/var/lib/kubelet/pods/016225c8-a52d-43fd-b0f9-b1df2c77a52b/volumes" Dec 09 17:32:51 crc kubenswrapper[4853]: I1209 17:32:51.590125 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878" path="/var/lib/kubelet/pods/2bfc70ec-4a94-4dbf-8f32-9ed35f7c9878/volumes" Dec 09 17:32:56 crc kubenswrapper[4853]: E1209 17:32:56.569856 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:33:04 crc kubenswrapper[4853]: I1209 17:33:04.034382 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-brt9g"] Dec 09 17:33:04 crc kubenswrapper[4853]: I1209 17:33:04.046138 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-brt9g"] Dec 09 17:33:04 crc kubenswrapper[4853]: E1209 17:33:04.569521 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:33:05 crc kubenswrapper[4853]: I1209 17:33:05.030871 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4szt4"] Dec 09 17:33:05 crc kubenswrapper[4853]: I1209 17:33:05.042183 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4szt4"] Dec 09 17:33:05 crc kubenswrapper[4853]: I1209 17:33:05.584920 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbf8680-15b3-40ea-aed2-16f33ed9c8fe" path="/var/lib/kubelet/pods/6fbf8680-15b3-40ea-aed2-16f33ed9c8fe/volumes" Dec 09 17:33:05 crc kubenswrapper[4853]: I1209 17:33:05.586713 4853 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="982d513f-97ab-460e-b032-f639b8ef6ff5" path="/var/lib/kubelet/pods/982d513f-97ab-460e-b032-f639b8ef6ff5/volumes" Dec 09 17:33:08 crc kubenswrapper[4853]: E1209 17:33:08.571053 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:33:09 crc kubenswrapper[4853]: I1209 17:33:09.079887 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-4x7tx"] Dec 09 17:33:09 crc kubenswrapper[4853]: I1209 17:33:09.094398 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-4x7tx"] Dec 09 17:33:09 crc kubenswrapper[4853]: I1209 17:33:09.581862 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f1f371-2f71-401a-8c20-d400f873f3d1" path="/var/lib/kubelet/pods/23f1f371-2f71-401a-8c20-d400f873f3d1/volumes" Dec 09 17:33:16 crc kubenswrapper[4853]: E1209 17:33:16.570177 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:33:21 crc kubenswrapper[4853]: E1209 17:33:21.572362 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:33:28 crc kubenswrapper[4853]: I1209 17:33:28.997207 4853 scope.go:117] "RemoveContainer" containerID="584e4925a30fb324ef0e31c669550cb6caec9ea064608ec5422e24ccda24b9ec" Dec 09 17:33:29 crc kubenswrapper[4853]: I1209 17:33:29.050214 4853 scope.go:117] "RemoveContainer" containerID="412a7c4654232b8b2c283e9d343a15424b43981693e4cdad5eb6e3ae524fa7c3" Dec 09 17:33:29 crc kubenswrapper[4853]: I1209 17:33:29.143745 4853 scope.go:117] "RemoveContainer" containerID="ffd0c4af94a5fbda481a7f28c3eb73f548a5d8a1e25fdec199b27d450e87401c" Dec 09 17:33:29 crc kubenswrapper[4853]: I1209 17:33:29.171655 4853 scope.go:117] "RemoveContainer" containerID="7725319fd781eba583befbc4b11e0edf73401339a32c0a8e8a07d532719c2593" Dec 09 17:33:29 crc kubenswrapper[4853]: I1209 17:33:29.223180 4853 scope.go:117] "RemoveContainer" containerID="624e36bb2913de090322018d8e37f756c7a1bac84a7a75b3e8baecebc88a9fae" Dec 09 17:33:29 crc kubenswrapper[4853]: I1209 17:33:29.274658 4853 scope.go:117] "RemoveContainer" containerID="dc6df680aecde7d4948282ed3194f5671494ec2d190ca83a33ab3135a99e7816" Dec 09 17:33:29 crc kubenswrapper[4853]: E1209 17:33:29.568300 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:33:36 crc kubenswrapper[4853]: E1209 17:33:36.570118 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:33:43 crc kubenswrapper[4853]: E1209 17:33:43.584587 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:33:49 crc kubenswrapper[4853]: I1209 17:33:49.062384 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-n25td"] Dec 09 17:33:49 crc kubenswrapper[4853]: I1209 17:33:49.073988 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-n25td"] Dec 09 17:33:49 crc kubenswrapper[4853]: I1209 17:33:49.580151 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638b7c6f-30d1-4d07-8532-51736f5e74c5" path="/var/lib/kubelet/pods/638b7c6f-30d1-4d07-8532-51736f5e74c5/volumes" Dec 09 17:33:50 crc kubenswrapper[4853]: E1209 17:33:50.569624 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:33:55 crc kubenswrapper[4853]: E1209 17:33:55.570234 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:34:01 crc kubenswrapper[4853]: E1209 17:34:01.571730 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:34:08 crc kubenswrapper[4853]: E1209 17:34:08.570768 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:34:13 crc kubenswrapper[4853]: E1209 17:34:13.581952 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:34:20 crc kubenswrapper[4853]: E1209 17:34:20.570313 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:34:25 crc kubenswrapper[4853]: E1209 17:34:25.569445 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:34:28 crc kubenswrapper[4853]: I1209 17:34:28.593345 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:34:28 crc kubenswrapper[4853]: I1209 17:34:28.593733 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:34:29 crc kubenswrapper[4853]: I1209 17:34:29.449588 4853 scope.go:117] "RemoveContainer" containerID="e73549315cd7e4c1ce3b6077db0953299515c9c4d0caa7a8fdf6cecd506431d9" Dec 09 17:34:34 crc kubenswrapper[4853]: E1209 17:34:34.570197 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:34:36 crc kubenswrapper[4853]: E1209 17:34:36.569032 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:34:48 crc kubenswrapper[4853]: E1209 17:34:48.569701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:34:49 crc kubenswrapper[4853]: E1209 17:34:49.571615 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:34:58 crc kubenswrapper[4853]: I1209 17:34:58.593257 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:34:58 crc kubenswrapper[4853]: I1209 17:34:58.594005 4853 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:34:59 crc kubenswrapper[4853]: E1209 17:34:59.569975 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:35:01 crc kubenswrapper[4853]: E1209 17:35:01.570898 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:35:12 crc kubenswrapper[4853]: E1209 17:35:12.569491 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:35:13 crc kubenswrapper[4853]: E1209 17:35:13.582801 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:35:24 crc kubenswrapper[4853]: E1209 17:35:24.570512 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:35:28 crc kubenswrapper[4853]: E1209 17:35:28.569637 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:35:28 crc kubenswrapper[4853]: I1209 17:35:28.592784 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:35:28 crc kubenswrapper[4853]: I1209 17:35:28.592839 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:35:28 crc kubenswrapper[4853]: I1209 17:35:28.592885 4853 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:35:28 crc kubenswrapper[4853]: I1209 17:35:28.593902 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:35:28 crc kubenswrapper[4853]: I1209 17:35:28.593953 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" gracePeriod=600 Dec 09 17:35:28 crc kubenswrapper[4853]: E1209 17:35:28.716442 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:35:29 crc kubenswrapper[4853]: I1209 17:35:29.608517 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" exitCode=0 Dec 09 17:35:29 crc kubenswrapper[4853]: I1209 17:35:29.608653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e"} Dec 09 17:35:29 crc kubenswrapper[4853]: I1209 17:35:29.608969 4853 scope.go:117] "RemoveContainer" containerID="00554452bc356075032339a30493a8db8eb8765a75d762f1a48b0ef033e8dfa3" Dec 09 17:35:29 crc kubenswrapper[4853]: I1209 17:35:29.610025 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:35:29 crc kubenswrapper[4853]: E1209 17:35:29.610587 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:35:38 crc kubenswrapper[4853]: E1209 17:35:38.570837 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:35:43 crc kubenswrapper[4853]: E1209 17:35:43.591415 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:35:44 crc kubenswrapper[4853]: I1209 17:35:44.567520 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:35:44 crc kubenswrapper[4853]: E1209 17:35:44.567860 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:35:49 crc kubenswrapper[4853]: E1209 17:35:49.568867 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:35:57 crc kubenswrapper[4853]: I1209 17:35:57.567871 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:35:57 crc kubenswrapper[4853]: E1209 17:35:57.568780 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:35:57 crc kubenswrapper[4853]: I1209 17:35:57.569428 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:35:57 crc kubenswrapper[4853]: E1209 17:35:57.691299 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:35:57 crc kubenswrapper[4853]: E1209 17:35:57.691356 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:35:57 crc kubenswrapper[4853]: E1209 17:35:57.691473 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:35:57 crc kubenswrapper[4853]: E1209 17:35:57.692892 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:36:02 crc kubenswrapper[4853]: E1209 17:36:02.695187 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:36:02 crc kubenswrapper[4853]: E1209 17:36:02.695738 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:36:02 crc kubenswrapper[4853]: E1209 17:36:02.695864 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:36:02 crc kubenswrapper[4853]: E1209 17:36:02.697082 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:36:10 crc kubenswrapper[4853]: I1209 17:36:10.567834 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:36:10 crc kubenswrapper[4853]: E1209 17:36:10.568829 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:36:10 crc kubenswrapper[4853]: E1209 17:36:10.570389 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:36:15 crc kubenswrapper[4853]: E1209 17:36:15.571020 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:36:22 crc kubenswrapper[4853]: I1209 17:36:22.568311 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:36:22 crc kubenswrapper[4853]: E1209 17:36:22.570913 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:36:22 crc kubenswrapper[4853]: E1209 17:36:22.574400 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:36:27 crc kubenswrapper[4853]: E1209 17:36:27.570156 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:36:29 crc kubenswrapper[4853]: I1209 17:36:29.540131 4853 scope.go:117] "RemoveContainer" containerID="3b3e890bf884c91832d185f6816acc35966d15f9f431285e5e77dac2d0c1218e" Dec 09 17:36:29 crc kubenswrapper[4853]: I1209 17:36:29.579196 4853 scope.go:117] "RemoveContainer" containerID="65fdfe43a925abc711646dac4ade7b03d35290dcd36db240f9ce0d7f79e7419a" Dec 09 17:36:29 crc kubenswrapper[4853]: I1209 17:36:29.621236 4853 scope.go:117] "RemoveContainer" containerID="241e42d112c216dd71f83095557ae3af0d645b64ab6d1e7699f992f2345168e2" Dec 09 17:36:35 crc kubenswrapper[4853]: E1209 17:36:35.571380 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:36:36 crc kubenswrapper[4853]: I1209 17:36:36.568129 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:36:36 crc kubenswrapper[4853]: E1209 17:36:36.568459 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:36:40 crc kubenswrapper[4853]: E1209 17:36:40.570192 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:36:47 crc kubenswrapper[4853]: I1209 17:36:47.567025 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:36:47 crc kubenswrapper[4853]: E1209 17:36:47.567905 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:36:49 crc kubenswrapper[4853]: E1209 17:36:49.570842 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:36:51 crc kubenswrapper[4853]: E1209 17:36:51.569208 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:36:58 crc kubenswrapper[4853]: I1209 17:36:58.568293 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:36:58 crc kubenswrapper[4853]: E1209 17:36:58.569283 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:37:02 crc kubenswrapper[4853]: E1209 17:37:02.570117 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:37:06 crc kubenswrapper[4853]: E1209 17:37:06.569171 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:37:11 crc kubenswrapper[4853]: I1209 17:37:11.570586 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:37:11 crc kubenswrapper[4853]: E1209 17:37:11.571679 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:37:15 crc kubenswrapper[4853]: I1209 17:37:15.902216 4853 generic.go:334] "Generic (PLEG): container finished" podID="7796c327-5952-4b15-a864-511d8f1c75d6" containerID="884d494bc9308378ef6264b4bbd860dbf1e3c20d9ace47493ff2383830e07605" exitCode=2 Dec 09 17:37:15 crc kubenswrapper[4853]: I1209 17:37:15.902294 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" event={"ID":"7796c327-5952-4b15-a864-511d8f1c75d6","Type":"ContainerDied","Data":"884d494bc9308378ef6264b4bbd860dbf1e3c20d9ace47493ff2383830e07605"} Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.419677 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.531863 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-ssh-key\") pod \"7796c327-5952-4b15-a864-511d8f1c75d6\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.532123 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mchj\" (UniqueName: \"kubernetes.io/projected/7796c327-5952-4b15-a864-511d8f1c75d6-kube-api-access-8mchj\") pod \"7796c327-5952-4b15-a864-511d8f1c75d6\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.532486 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-inventory\") pod \"7796c327-5952-4b15-a864-511d8f1c75d6\" (UID: \"7796c327-5952-4b15-a864-511d8f1c75d6\") " Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.538002 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7796c327-5952-4b15-a864-511d8f1c75d6-kube-api-access-8mchj" (OuterVolumeSpecName: "kube-api-access-8mchj") pod "7796c327-5952-4b15-a864-511d8f1c75d6" (UID: "7796c327-5952-4b15-a864-511d8f1c75d6"). InnerVolumeSpecName "kube-api-access-8mchj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.567873 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-inventory" (OuterVolumeSpecName: "inventory") pod "7796c327-5952-4b15-a864-511d8f1c75d6" (UID: "7796c327-5952-4b15-a864-511d8f1c75d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:37:17 crc kubenswrapper[4853]: E1209 17:37:17.571103 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.589637 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7796c327-5952-4b15-a864-511d8f1c75d6" (UID: "7796c327-5952-4b15-a864-511d8f1c75d6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.636268 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.636527 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mchj\" (UniqueName: \"kubernetes.io/projected/7796c327-5952-4b15-a864-511d8f1c75d6-kube-api-access-8mchj\") on node \"crc\" DevicePath \"\"" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.636683 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7796c327-5952-4b15-a864-511d8f1c75d6-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.925356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" event={"ID":"7796c327-5952-4b15-a864-511d8f1c75d6","Type":"ContainerDied","Data":"0ec753d923add03c2953ba55f3a0ffa35ecb65f6f05fa1122243f965d04780c8"} Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.925401 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec753d923add03c2953ba55f3a0ffa35ecb65f6f05fa1122243f965d04780c8" Dec 09 17:37:17 crc kubenswrapper[4853]: I1209 17:37:17.925758 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw" Dec 09 17:37:20 crc kubenswrapper[4853]: E1209 17:37:20.569815 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.035780 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf"] Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.036977 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.036996 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037012 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="extract-utilities" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037020 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="extract-utilities" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037030 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7796c327-5952-4b15-a864-511d8f1c75d6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037038 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7796c327-5952-4b15-a864-511d8f1c75d6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037066 4853 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="extract-content" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037073 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="extract-content" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037096 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23239832-a253-4323-8567-714a358448b5" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037103 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="23239832-a253-4323-8567-714a358448b5" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037125 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="extract-utilities" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037133 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="extract-utilities" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037162 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23239832-a253-4323-8567-714a358448b5" containerName="extract-utilities" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037169 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="23239832-a253-4323-8567-714a358448b5" containerName="extract-utilities" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037182 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23239832-a253-4323-8567-714a358448b5" containerName="extract-content" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037189 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="23239832-a253-4323-8567-714a358448b5" containerName="extract-content" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037198 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="extract-content" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037205 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="extract-content" Dec 09 17:37:25 crc kubenswrapper[4853]: E1209 17:37:25.037216 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037223 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037490 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="424cf22a-a62a-49aa-853a-73b5ae86acb7" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037511 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="23239832-a253-4323-8567-714a358448b5" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037521 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="98958ce8-eace-467c-b9d9-8e2bbb5041d9" containerName="registry-server" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.037534 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7796c327-5952-4b15-a864-511d8f1c75d6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 
17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.038904 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.043517 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.043788 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.043810 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.045166 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.103009 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf"] Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.121109 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.121280 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkprb\" (UniqueName: \"kubernetes.io/projected/0f19467b-be6d-4600-8e1e-4bcb5627e44f-kube-api-access-qkprb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.121315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.224316 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkprb\" (UniqueName: \"kubernetes.io/projected/0f19467b-be6d-4600-8e1e-4bcb5627e44f-kube-api-access-qkprb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.224394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.224576 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.240467 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.242465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.243369 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkprb\" (UniqueName: \"kubernetes.io/projected/0f19467b-be6d-4600-8e1e-4bcb5627e44f-kube-api-access-qkprb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:25 crc kubenswrapper[4853]: I1209 17:37:25.365389 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:37:26 crc kubenswrapper[4853]: I1209 17:37:26.001332 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf"] Dec 09 17:37:26 crc kubenswrapper[4853]: I1209 17:37:26.568458 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:37:26 crc kubenswrapper[4853]: E1209 17:37:26.568837 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:37:27 crc kubenswrapper[4853]: I1209 17:37:27.039477 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" event={"ID":"0f19467b-be6d-4600-8e1e-4bcb5627e44f","Type":"ContainerStarted","Data":"591c2a1c9cc5bc25ee3b8b245a3dbe090fec71a2f02f4cea1acf1f75c92df242"} Dec 09 17:37:27 crc kubenswrapper[4853]: I1209 17:37:27.039821 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" event={"ID":"0f19467b-be6d-4600-8e1e-4bcb5627e44f","Type":"ContainerStarted","Data":"ba6f9cf8f82f724fbb4a03e397fd502f0a3c1bbd82ab747ab68b0f462ab6e5b2"} Dec 09 17:37:27 crc kubenswrapper[4853]: I1209 17:37:27.064700 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" 
podStartSLOduration=1.539064147 podStartE2EDuration="2.06467813s" podCreationTimestamp="2025-12-09 17:37:25 +0000 UTC" firstStartedPulling="2025-12-09 17:37:26.015077909 +0000 UTC m=+2472.949817101" lastFinishedPulling="2025-12-09 17:37:26.540691902 +0000 UTC m=+2473.475431084" observedRunningTime="2025-12-09 17:37:27.053036159 +0000 UTC m=+2473.987775361" watchObservedRunningTime="2025-12-09 17:37:27.06467813 +0000 UTC m=+2473.999417312" Dec 09 17:37:28 crc kubenswrapper[4853]: E1209 17:37:28.569088 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:37:33 crc kubenswrapper[4853]: E1209 17:37:33.589144 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:37:40 crc kubenswrapper[4853]: I1209 17:37:40.567749 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:37:40 crc kubenswrapper[4853]: E1209 17:37:40.568909 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:37:43 crc kubenswrapper[4853]: E1209 17:37:43.578020 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:37:46 crc kubenswrapper[4853]: E1209 17:37:46.571541 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:37:55 crc kubenswrapper[4853]: I1209 17:37:55.567346 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:37:55 crc kubenswrapper[4853]: E1209 17:37:55.568440 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:37:58 crc kubenswrapper[4853]: E1209 17:37:58.569203 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:38:00 crc kubenswrapper[4853]: E1209 17:38:00.570360 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:38:10 crc kubenswrapper[4853]: I1209 17:38:10.567469 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:38:10 crc kubenswrapper[4853]: E1209 17:38:10.568424 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:38:10 crc kubenswrapper[4853]: E1209 17:38:10.569805 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:38:11 crc kubenswrapper[4853]: E1209 17:38:11.569174 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:38:23 crc kubenswrapper[4853]: E1209 17:38:23.582289 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:38:25 crc kubenswrapper[4853]: I1209 17:38:25.568364 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:38:25 crc kubenswrapper[4853]: E1209 17:38:25.569103 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:38:26 crc kubenswrapper[4853]: E1209 17:38:26.570120 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:38:35 crc kubenswrapper[4853]: E1209 17:38:35.569667 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:38:37 crc kubenswrapper[4853]: I1209 17:38:37.568118 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:38:37 crc kubenswrapper[4853]: E1209 17:38:37.568695 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:38:39 crc kubenswrapper[4853]: E1209 17:38:39.569076 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:38:46 crc kubenswrapper[4853]: E1209 17:38:46.570329 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:38:50 crc kubenswrapper[4853]: I1209 17:38:50.567703 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:38:50 crc kubenswrapper[4853]: E1209 17:38:50.568794 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:38:53 crc kubenswrapper[4853]: E1209 17:38:53.577353 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:39:01 crc kubenswrapper[4853]: E1209 17:39:01.569048 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:39:03 crc kubenswrapper[4853]: I1209 17:39:03.579497 4853 scope.go:117] "RemoveContainer" 
containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:39:03 crc kubenswrapper[4853]: E1209 17:39:03.580225 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:39:06 crc kubenswrapper[4853]: E1209 17:39:06.570141 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:39:16 crc kubenswrapper[4853]: E1209 17:39:16.570303 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:39:18 crc kubenswrapper[4853]: I1209 17:39:18.567914 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:39:18 crc kubenswrapper[4853]: E1209 17:39:18.570451 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:39:20 crc kubenswrapper[4853]: E1209 17:39:20.569750 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:39:30 crc kubenswrapper[4853]: E1209 17:39:30.571096 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:39:32 crc kubenswrapper[4853]: I1209 17:39:32.567341 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:39:32 crc kubenswrapper[4853]: E1209 17:39:32.567874 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:39:34 crc 
kubenswrapper[4853]: E1209 17:39:34.571269 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:39:41 crc kubenswrapper[4853]: I1209 17:39:41.900425 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d9x7b"] Dec 09 17:39:41 crc kubenswrapper[4853]: I1209 17:39:41.906659 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:41 crc kubenswrapper[4853]: I1209 17:39:41.915437 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9x7b"] Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.063004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbtd\" (UniqueName: \"kubernetes.io/projected/dc147e14-e291-4337-bc57-17aee59cb337-kube-api-access-6rbtd\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.063084 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-utilities\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.063288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-catalog-content\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.165746 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbtd\" (UniqueName: \"kubernetes.io/projected/dc147e14-e291-4337-bc57-17aee59cb337-kube-api-access-6rbtd\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.165837 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-utilities\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.165902 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-catalog-content\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.166437 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-utilities\") pod 
\"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.166524 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-catalog-content\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.187715 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbtd\" (UniqueName: \"kubernetes.io/projected/dc147e14-e291-4337-bc57-17aee59cb337-kube-api-access-6rbtd\") pod \"redhat-operators-d9x7b\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.245160 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:42 crc kubenswrapper[4853]: E1209 17:39:42.572270 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:39:42 crc kubenswrapper[4853]: I1209 17:39:42.764469 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9x7b"] Dec 09 17:39:42 crc kubenswrapper[4853]: W1209 17:39:42.773426 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc147e14_e291_4337_bc57_17aee59cb337.slice/crio-5115fb1f795fed9931d8f714acd34c7077be32b400f60d9390a262f1325a2df1 WatchSource:0}: Error finding container 5115fb1f795fed9931d8f714acd34c7077be32b400f60d9390a262f1325a2df1: Status 404 returned error can't find the container with id 5115fb1f795fed9931d8f714acd34c7077be32b400f60d9390a262f1325a2df1 Dec 09 17:39:43 crc kubenswrapper[4853]: I1209 17:39:43.689307 4853 generic.go:334] "Generic (PLEG): container finished" podID="dc147e14-e291-4337-bc57-17aee59cb337" containerID="f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef" exitCode=0 Dec 09 17:39:43 crc kubenswrapper[4853]: I1209 17:39:43.689433 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9x7b" event={"ID":"dc147e14-e291-4337-bc57-17aee59cb337","Type":"ContainerDied","Data":"f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef"} Dec 09 17:39:43 crc kubenswrapper[4853]: I1209 17:39:43.689836 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9x7b" event={"ID":"dc147e14-e291-4337-bc57-17aee59cb337","Type":"ContainerStarted","Data":"5115fb1f795fed9931d8f714acd34c7077be32b400f60d9390a262f1325a2df1"} Dec 09 17:39:44 crc kubenswrapper[4853]: I1209 17:39:44.702111 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9x7b" event={"ID":"dc147e14-e291-4337-bc57-17aee59cb337","Type":"ContainerStarted","Data":"01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45"} Dec 09 17:39:45 crc kubenswrapper[4853]: E1209 17:39:45.572191 4853 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:39:46 crc kubenswrapper[4853]: I1209 17:39:46.567844 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:39:46 crc kubenswrapper[4853]: E1209 17:39:46.568538 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:39:48 crc kubenswrapper[4853]: I1209 17:39:48.752239 4853 generic.go:334] "Generic (PLEG): container finished" podID="dc147e14-e291-4337-bc57-17aee59cb337" containerID="01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45" exitCode=0 Dec 09 17:39:48 crc kubenswrapper[4853]: I1209 17:39:48.752310 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9x7b" event={"ID":"dc147e14-e291-4337-bc57-17aee59cb337","Type":"ContainerDied","Data":"01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45"} Dec 09 17:39:49 crc kubenswrapper[4853]: I1209 17:39:49.767807 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9x7b" event={"ID":"dc147e14-e291-4337-bc57-17aee59cb337","Type":"ContainerStarted","Data":"97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51"} Dec 09 17:39:49 crc kubenswrapper[4853]: I1209 17:39:49.805701 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d9x7b" podStartSLOduration=3.260835316 podStartE2EDuration="8.8055651s" podCreationTimestamp="2025-12-09 17:39:41 +0000 UTC" firstStartedPulling="2025-12-09 17:39:43.691340695 +0000 UTC m=+2610.626079887" lastFinishedPulling="2025-12-09 17:39:49.236070489 +0000 UTC m=+2616.170809671" observedRunningTime="2025-12-09 17:39:49.787581185 +0000 UTC m=+2616.722320417" watchObservedRunningTime="2025-12-09 17:39:49.8055651 +0000 UTC m=+2616.740304332" Dec 09 17:39:52 crc kubenswrapper[4853]: I1209 17:39:52.245709 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:52 crc kubenswrapper[4853]: I1209 17:39:52.246051 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:39:53 crc kubenswrapper[4853]: I1209 17:39:53.320921 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d9x7b" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="registry-server" probeResult="failure" output=< Dec 09 17:39:53 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 17:39:53 crc kubenswrapper[4853]: > Dec 09 17:39:56 crc kubenswrapper[4853]: E1209 17:39:56.568755 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:39:57 crc kubenswrapper[4853]: E1209 17:39:57.570658 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:40:00 crc kubenswrapper[4853]: I1209 17:40:00.567439 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:40:00 crc kubenswrapper[4853]: E1209 17:40:00.568342 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:40:02 crc kubenswrapper[4853]: I1209 17:40:02.304030 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:40:02 crc kubenswrapper[4853]: I1209 17:40:02.362420 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:40:02 crc kubenswrapper[4853]: I1209 17:40:02.543369 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9x7b"] Dec 09 17:40:03 crc kubenswrapper[4853]: I1209 17:40:03.938881 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d9x7b" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="registry-server" containerID="cri-o://97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51" gracePeriod=2 Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.447202 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.552768 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-catalog-content\") pod \"dc147e14-e291-4337-bc57-17aee59cb337\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.552965 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rbtd\" (UniqueName: \"kubernetes.io/projected/dc147e14-e291-4337-bc57-17aee59cb337-kube-api-access-6rbtd\") pod \"dc147e14-e291-4337-bc57-17aee59cb337\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.553018 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-utilities\") pod \"dc147e14-e291-4337-bc57-17aee59cb337\" (UID: \"dc147e14-e291-4337-bc57-17aee59cb337\") " Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.553747 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-utilities" (OuterVolumeSpecName: "utilities") pod "dc147e14-e291-4337-bc57-17aee59cb337" (UID: "dc147e14-e291-4337-bc57-17aee59cb337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.561037 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc147e14-e291-4337-bc57-17aee59cb337-kube-api-access-6rbtd" (OuterVolumeSpecName: "kube-api-access-6rbtd") pod "dc147e14-e291-4337-bc57-17aee59cb337" (UID: "dc147e14-e291-4337-bc57-17aee59cb337"). InnerVolumeSpecName "kube-api-access-6rbtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.655710 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rbtd\" (UniqueName: \"kubernetes.io/projected/dc147e14-e291-4337-bc57-17aee59cb337-kube-api-access-6rbtd\") on node \"crc\" DevicePath \"\"" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.655762 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.674817 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc147e14-e291-4337-bc57-17aee59cb337" (UID: "dc147e14-e291-4337-bc57-17aee59cb337"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.757830 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc147e14-e291-4337-bc57-17aee59cb337-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.949693 4853 generic.go:334] "Generic (PLEG): container finished" podID="dc147e14-e291-4337-bc57-17aee59cb337" containerID="97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51" exitCode=0 Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.949734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9x7b" event={"ID":"dc147e14-e291-4337-bc57-17aee59cb337","Type":"ContainerDied","Data":"97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51"} Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.949817 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9x7b" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.950080 4853 scope.go:117] "RemoveContainer" containerID="97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.950064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9x7b" event={"ID":"dc147e14-e291-4337-bc57-17aee59cb337","Type":"ContainerDied","Data":"5115fb1f795fed9931d8f714acd34c7077be32b400f60d9390a262f1325a2df1"} Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.970269 4853 scope.go:117] "RemoveContainer" containerID="01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45" Dec 09 17:40:04 crc kubenswrapper[4853]: I1209 17:40:04.993549 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9x7b"] Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.016614 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d9x7b"] Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.048879 4853 scope.go:117] "RemoveContainer" containerID="f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef" Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.084753 4853 scope.go:117] "RemoveContainer" containerID="97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51" Dec 09 17:40:05 crc kubenswrapper[4853]: E1209 17:40:05.085257 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51\": container with ID starting with 97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51 not found: ID does not exist" containerID="97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51" Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.085312 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51"} err="failed to get container status \"97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51\": rpc error: code = NotFound desc = could not find container \"97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51\": container with ID starting with 97a0d36e8b55f8fd0b98ff9bbb8f32a9c50be4d66d87dd8da00d62201b61ae51 not found: ID does not exist" Dec 09 17:40:05 crc 
kubenswrapper[4853]: I1209 17:40:05.085344 4853 scope.go:117] "RemoveContainer" containerID="01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45" Dec 09 17:40:05 crc kubenswrapper[4853]: E1209 17:40:05.086017 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45\": container with ID starting with 01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45 not found: ID does not exist" containerID="01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45" Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.086073 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45"} err="failed to get container status \"01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45\": rpc error: code = NotFound desc = could not find container \"01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45\": container with ID starting with 01a74c97f1e446e9bde721fb4eaf8dcd04700a2db1c51ebcd37ca2653acaeb45 not found: ID does not exist" Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.086116 4853 scope.go:117] "RemoveContainer" containerID="f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef" Dec 09 17:40:05 crc kubenswrapper[4853]: E1209 17:40:05.086375 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef\": container with ID starting with f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef not found: ID does not exist" containerID="f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef" Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.086427 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef"} err="failed to get container status \"f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef\": rpc error: code = NotFound desc = could not find container \"f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef\": container with ID starting with f890f02d6c4c95e3d7d484fb9b82f4e1d8bc37c1f79059275941f0a5e13c59ef not found: ID does not exist" Dec 09 17:40:05 crc kubenswrapper[4853]: I1209 17:40:05.579935 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc147e14-e291-4337-bc57-17aee59cb337" path="/var/lib/kubelet/pods/dc147e14-e291-4337-bc57-17aee59cb337/volumes" Dec 09 17:40:08 crc kubenswrapper[4853]: E1209 17:40:08.570421 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:40:09 crc kubenswrapper[4853]: E1209 17:40:09.568465 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:40:11 crc kubenswrapper[4853]: 
I1209 17:40:11.569144 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:40:11 crc kubenswrapper[4853]: E1209 17:40:11.569938 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:40:20 crc kubenswrapper[4853]: E1209 17:40:20.570845 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:40:23 crc kubenswrapper[4853]: E1209 17:40:23.580927 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:40:24 crc kubenswrapper[4853]: I1209 17:40:24.567898 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:40:24 crc kubenswrapper[4853]: E1209 17:40:24.568346 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:40:33 crc kubenswrapper[4853]: E1209 17:40:33.591235 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:40:36 crc kubenswrapper[4853]: I1209 17:40:36.567260 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:40:36 crc kubenswrapper[4853]: E1209 17:40:36.569753 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:40:37 crc kubenswrapper[4853]: I1209 17:40:37.333135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"2d549d2183bb087074c6104339faa2cc893dc6d092dd3ba1b2b8b10fbb7b84db"} Dec 09 17:40:46 crc kubenswrapper[4853]: E1209 17:40:46.570958 4853 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:40:49 crc kubenswrapper[4853]: E1209 17:40:49.569842 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:40:59 crc kubenswrapper[4853]: I1209 17:40:59.910516 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fbsr4"] Dec 09 17:40:59 crc kubenswrapper[4853]: E1209 17:40:59.911411 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="registry-server" Dec 09 17:40:59 crc kubenswrapper[4853]: I1209 17:40:59.911424 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="registry-server" Dec 09 17:40:59 crc kubenswrapper[4853]: E1209 17:40:59.911437 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="extract-content" Dec 09 17:40:59 crc kubenswrapper[4853]: I1209 17:40:59.911443 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="extract-content" Dec 09 17:40:59 crc kubenswrapper[4853]: E1209 17:40:59.911457 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="extract-utilities" Dec 09 17:40:59 crc kubenswrapper[4853]: I1209 17:40:59.911465 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="extract-utilities" Dec 09 17:40:59 crc kubenswrapper[4853]: I1209 17:40:59.911880 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc147e14-e291-4337-bc57-17aee59cb337" containerName="registry-server" Dec 09 17:40:59 crc kubenswrapper[4853]: I1209 17:40:59.919930 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:40:59 crc kubenswrapper[4853]: I1209 17:40:59.969799 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbsr4"] Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.000226 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmjrv\" (UniqueName: \"kubernetes.io/projected/51c51666-5298-4236-9fcd-c8e7734f592e-kube-api-access-rmjrv\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.000428 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-catalog-content\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.000845 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-utilities\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.103042 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-catalog-content\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.103172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-utilities\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.103251 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmjrv\" (UniqueName: \"kubernetes.io/projected/51c51666-5298-4236-9fcd-c8e7734f592e-kube-api-access-rmjrv\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.103626 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-catalog-content\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.103677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-utilities\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.127381 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rmjrv\" (UniqueName: \"kubernetes.io/projected/51c51666-5298-4236-9fcd-c8e7734f592e-kube-api-access-rmjrv\") pod \"certified-operators-fbsr4\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.283235 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:00 crc kubenswrapper[4853]: E1209 17:41:00.569826 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:41:00 crc kubenswrapper[4853]: I1209 17:41:00.854822 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fbsr4"] Dec 09 17:41:00 crc kubenswrapper[4853]: W1209 17:41:00.858270 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1 WatchSource:0}: Error finding container 19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1: Status 404 returned error can't find the container with id 19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1 Dec 09 17:41:01 crc kubenswrapper[4853]: I1209 17:41:01.574196 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:41:01 crc kubenswrapper[4853]: I1209 17:41:01.656739 4853 generic.go:334] "Generic (PLEG): container finished" podID="51c51666-5298-4236-9fcd-c8e7734f592e" containerID="b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1" exitCode=0 Dec 09 17:41:01 crc kubenswrapper[4853]: I1209 17:41:01.656790 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbsr4" event={"ID":"51c51666-5298-4236-9fcd-c8e7734f592e","Type":"ContainerDied","Data":"b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1"} Dec 09 17:41:01 crc kubenswrapper[4853]: I1209 17:41:01.656821 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbsr4" event={"ID":"51c51666-5298-4236-9fcd-c8e7734f592e","Type":"ContainerStarted","Data":"19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1"} Dec 09 17:41:01 crc kubenswrapper[4853]: E1209 17:41:01.729938 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:41:01 crc kubenswrapper[4853]: E1209 17:41:01.730125 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:41:01 crc kubenswrapper[4853]: E1209 17:41:01.730321 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 17:41:01 crc kubenswrapper[4853]: E1209 17:41:01.732184 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:41:03 crc kubenswrapper[4853]: I1209 17:41:03.682057 4853 generic.go:334] "Generic (PLEG): container finished" podID="51c51666-5298-4236-9fcd-c8e7734f592e" containerID="e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9" exitCode=0 Dec 09 17:41:03 crc kubenswrapper[4853]: I1209 17:41:03.682118 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbsr4" event={"ID":"51c51666-5298-4236-9fcd-c8e7734f592e","Type":"ContainerDied","Data":"e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9"} Dec 09 17:41:04 crc kubenswrapper[4853]: I1209 17:41:04.696422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbsr4" event={"ID":"51c51666-5298-4236-9fcd-c8e7734f592e","Type":"ContainerStarted","Data":"f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792"} Dec 09 17:41:04 crc kubenswrapper[4853]: I1209 17:41:04.722236 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fbsr4" podStartSLOduration=3.215300114 podStartE2EDuration="5.722218454s" podCreationTimestamp="2025-12-09 17:40:59 +0000 UTC" firstStartedPulling="2025-12-09 17:41:01.660493441 +0000 UTC m=+2688.595232613" lastFinishedPulling="2025-12-09 17:41:04.167411761 +0000 UTC m=+2691.102150953" observedRunningTime="2025-12-09 17:41:04.719479091 +0000 UTC m=+2691.654218283" watchObservedRunningTime="2025-12-09 17:41:04.722218454 +0000 UTC m=+2691.656957636" Dec 09 17:41:10 crc kubenswrapper[4853]: I1209 17:41:10.284874 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:10 crc kubenswrapper[4853]: I1209 17:41:10.286392 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:10 crc kubenswrapper[4853]: I1209 17:41:10.329321 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:10 crc kubenswrapper[4853]: I1209 17:41:10.814958 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:10 crc kubenswrapper[4853]: I1209 17:41:10.878272 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbsr4"] Dec 09 17:41:11 crc kubenswrapper[4853]: E1209 17:41:11.669993 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has 
expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:41:11 crc kubenswrapper[4853]: E1209 17:41:11.670291 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:41:11 crc kubenswrapper[4853]: E1209 17:41:11.670428 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 17:41:11 crc kubenswrapper[4853]: E1209 17:41:11.671732 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:41:12 crc kubenswrapper[4853]: I1209 17:41:12.790354 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fbsr4" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="registry-server" containerID="cri-o://f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792" gracePeriod=2 Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.329851 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.441935 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmjrv\" (UniqueName: \"kubernetes.io/projected/51c51666-5298-4236-9fcd-c8e7734f592e-kube-api-access-rmjrv\") pod \"51c51666-5298-4236-9fcd-c8e7734f592e\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.442058 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-utilities\") pod \"51c51666-5298-4236-9fcd-c8e7734f592e\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.442231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-catalog-content\") pod \"51c51666-5298-4236-9fcd-c8e7734f592e\" (UID: \"51c51666-5298-4236-9fcd-c8e7734f592e\") " Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.443361 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-utilities" (OuterVolumeSpecName: "utilities") pod "51c51666-5298-4236-9fcd-c8e7734f592e" (UID: "51c51666-5298-4236-9fcd-c8e7734f592e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.455511 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c51666-5298-4236-9fcd-c8e7734f592e-kube-api-access-rmjrv" (OuterVolumeSpecName: "kube-api-access-rmjrv") pod "51c51666-5298-4236-9fcd-c8e7734f592e" (UID: "51c51666-5298-4236-9fcd-c8e7734f592e"). InnerVolumeSpecName "kube-api-access-rmjrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.495953 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51c51666-5298-4236-9fcd-c8e7734f592e" (UID: "51c51666-5298-4236-9fcd-c8e7734f592e"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.545614 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmjrv\" (UniqueName: \"kubernetes.io/projected/51c51666-5298-4236-9fcd-c8e7734f592e-kube-api-access-rmjrv\") on node \"crc\" DevicePath \"\"" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.545656 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.545668 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c51666-5298-4236-9fcd-c8e7734f592e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.801574 4853 generic.go:334] "Generic (PLEG): container finished" podID="51c51666-5298-4236-9fcd-c8e7734f592e" containerID="f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792" exitCode=0 Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.801637 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbsr4" event={"ID":"51c51666-5298-4236-9fcd-c8e7734f592e","Type":"ContainerDied","Data":"f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792"} Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.801661 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fbsr4" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.801687 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fbsr4" event={"ID":"51c51666-5298-4236-9fcd-c8e7734f592e","Type":"ContainerDied","Data":"19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1"} Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.801708 4853 scope.go:117] "RemoveContainer" containerID="f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.842791 4853 scope.go:117] "RemoveContainer" containerID="e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9" Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.847554 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fbsr4"] Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.873493 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fbsr4"] Dec 09 17:41:13 crc kubenswrapper[4853]: I1209 17:41:13.905282 4853 scope.go:117] "RemoveContainer" containerID="b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1" Dec 09 17:41:14 crc kubenswrapper[4853]: I1209 17:41:14.008976 4853 scope.go:117] "RemoveContainer" containerID="f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792" Dec 09 17:41:14 crc kubenswrapper[4853]: E1209 17:41:14.009319 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792\": container with ID starting with f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792 not found: ID does not exist" containerID="f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792" Dec 09 17:41:14 crc 
kubenswrapper[4853]: I1209 17:41:14.009346 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792"} err="failed to get container status \"f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792\": rpc error: code = NotFound desc = could not find container \"f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792\": container with ID starting with f75ea8761de8664593d13aadd74f53ecdf7ce1f1dd8def5bd9014aef07c53792 not found: ID does not exist" Dec 09 17:41:14 crc kubenswrapper[4853]: I1209 17:41:14.009367 4853 scope.go:117] "RemoveContainer" containerID="e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9" Dec 09 17:41:14 crc kubenswrapper[4853]: E1209 17:41:14.009608 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9\": container with ID starting with e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9 not found: ID does not exist" containerID="e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9" Dec 09 17:41:14 crc kubenswrapper[4853]: I1209 17:41:14.009631 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9"} err="failed to get container status \"e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9\": rpc error: code = NotFound desc = could not find container \"e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9\": container with ID starting with e8f041e0c06d9d67a44df7eca6ed0ca327b8de9eade3b1c0f3b727874634a5c9 not found: ID does not exist" Dec 09 17:41:14 crc kubenswrapper[4853]: I1209 17:41:14.009644 4853 scope.go:117] "RemoveContainer" containerID="b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1" Dec 09 17:41:14 crc kubenswrapper[4853]: E1209 17:41:14.009906 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1\": container with ID starting with b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1 not found: ID does not exist" containerID="b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1" Dec 09 17:41:14 crc kubenswrapper[4853]: I1209 17:41:14.009926 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1"} err="failed to get container status \"b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1\": rpc error: code = NotFound desc = could not find container \"b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1\": container with ID starting with b1c966b0085f07ab85004e695b06e292edcaca6083ec58837f88f3d9abfdedf1 not found: ID does not exist" Dec 09 17:41:14 crc kubenswrapper[4853]: E1209 17:41:14.071427 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:14 crc kubenswrapper[4853]: E1209 17:41:14.569511 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:41:15 crc kubenswrapper[4853]: I1209 17:41:15.581024 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" path="/var/lib/kubelet/pods/51c51666-5298-4236-9fcd-c8e7734f592e/volumes" Dec 09 17:41:20 crc kubenswrapper[4853]: E1209 17:41:20.761266 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:22 crc kubenswrapper[4853]: E1209 17:41:22.571334 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:41:29 crc kubenswrapper[4853]: E1209 17:41:29.437056 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:29 crc kubenswrapper[4853]: E1209 17:41:29.568909 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:41:30 crc kubenswrapper[4853]: E1209 17:41:30.826816 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:36 crc kubenswrapper[4853]: E1209 17:41:36.570082 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:41:40 crc kubenswrapper[4853]: E1209 17:41:40.571091 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:41:41 crc kubenswrapper[4853]: E1209 17:41:41.154160 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:44 crc kubenswrapper[4853]: E1209 17:41:44.071305 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:47 crc kubenswrapper[4853]: E1209 17:41:47.572764 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:41:48 crc kubenswrapper[4853]: E1209 17:41:48.301624 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:48 crc kubenswrapper[4853]: E1209 17:41:48.302159 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:51 crc kubenswrapper[4853]: E1209 17:41:51.209358 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:41:51 crc kubenswrapper[4853]: E1209 17:41:51.569366 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:41:59 crc kubenswrapper[4853]: E1209 17:41:59.346950 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:42:00 crc kubenswrapper[4853]: E1209 17:42:00.570006 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:42:01 crc kubenswrapper[4853]: E1209 17:42:01.274932 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:42:05 crc kubenswrapper[4853]: E1209 17:42:05.569419 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:42:11 crc kubenswrapper[4853]: E1209 17:42:11.691581 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51c51666_5298_4236_9fcd_c8e7734f592e.slice/crio-19e68e876f48aa5019b29326955db81b213d906bf94c6fe505053db22399aab1\": RecentStats: unable to find data in memory cache]" Dec 09 17:42:13 crc kubenswrapper[4853]: E1209 17:42:13.628129 4853 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b4c171f1897249305184b02be781cc90c0413d3cfbc9bbe02a8a5f9455ce0156/diff" to get inode usage: stat 
/var/lib/containers/storage/overlay/b4c171f1897249305184b02be781cc90c0413d3cfbc9bbe02a8a5f9455ce0156/diff: no such file or directory, extraDiskErr: Dec 09 17:42:14 crc kubenswrapper[4853]: E1209 17:42:14.571703 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:42:18 crc kubenswrapper[4853]: E1209 17:42:18.570039 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:42:26 crc kubenswrapper[4853]: E1209 17:42:26.569149 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:42:33 crc kubenswrapper[4853]: E1209 17:42:33.580980 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:42:39 crc kubenswrapper[4853]: E1209 17:42:39.569495 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:42:48 crc kubenswrapper[4853]: E1209 17:42:48.570819 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:42:53 crc kubenswrapper[4853]: E1209 17:42:53.578586 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:42:58 crc kubenswrapper[4853]: I1209 17:42:58.592974 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:42:58 crc kubenswrapper[4853]: I1209 17:42:58.593568 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:43:00 crc kubenswrapper[4853]: E1209 17:43:00.587324 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:43:04 crc kubenswrapper[4853]: E1209 17:43:04.568982 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:43:11 crc kubenswrapper[4853]: E1209 17:43:11.570282 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:43:17 crc kubenswrapper[4853]: E1209 17:43:17.572370 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:43:23 crc kubenswrapper[4853]: E1209 17:43:23.587480 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:43:28 crc kubenswrapper[4853]: E1209 17:43:28.571425 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:43:28 crc kubenswrapper[4853]: I1209 17:43:28.592846 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:43:28 crc kubenswrapper[4853]: I1209 17:43:28.592910 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:43:35 crc kubenswrapper[4853]: E1209 17:43:35.569611 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:43:40 crc kubenswrapper[4853]: E1209 17:43:40.571096 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:43:46 crc kubenswrapper[4853]: E1209 17:43:46.569312 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:43:46 crc kubenswrapper[4853]: I1209 17:43:46.641169 4853 generic.go:334] "Generic (PLEG): container finished" podID="0f19467b-be6d-4600-8e1e-4bcb5627e44f" containerID="591c2a1c9cc5bc25ee3b8b245a3dbe090fec71a2f02f4cea1acf1f75c92df242" exitCode=2 Dec 09 17:43:46 crc kubenswrapper[4853]: I1209 17:43:46.641229 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" event={"ID":"0f19467b-be6d-4600-8e1e-4bcb5627e44f","Type":"ContainerDied","Data":"591c2a1c9cc5bc25ee3b8b245a3dbe090fec71a2f02f4cea1acf1f75c92df242"} Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.188470 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.326630 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-inventory\") pod \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.327074 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-ssh-key\") pod \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.327150 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkprb\" (UniqueName: \"kubernetes.io/projected/0f19467b-be6d-4600-8e1e-4bcb5627e44f-kube-api-access-qkprb\") pod \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\" (UID: \"0f19467b-be6d-4600-8e1e-4bcb5627e44f\") " Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.335971 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f19467b-be6d-4600-8e1e-4bcb5627e44f-kube-api-access-qkprb" (OuterVolumeSpecName: "kube-api-access-qkprb") pod "0f19467b-be6d-4600-8e1e-4bcb5627e44f" (UID: "0f19467b-be6d-4600-8e1e-4bcb5627e44f"). InnerVolumeSpecName "kube-api-access-qkprb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.365786 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-inventory" (OuterVolumeSpecName: "inventory") pod "0f19467b-be6d-4600-8e1e-4bcb5627e44f" (UID: "0f19467b-be6d-4600-8e1e-4bcb5627e44f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.371893 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0f19467b-be6d-4600-8e1e-4bcb5627e44f" (UID: "0f19467b-be6d-4600-8e1e-4bcb5627e44f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.431076 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.431115 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0f19467b-be6d-4600-8e1e-4bcb5627e44f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.431134 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkprb\" (UniqueName: \"kubernetes.io/projected/0f19467b-be6d-4600-8e1e-4bcb5627e44f-kube-api-access-qkprb\") on node \"crc\" DevicePath \"\"" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.666947 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" event={"ID":"0f19467b-be6d-4600-8e1e-4bcb5627e44f","Type":"ContainerDied","Data":"ba6f9cf8f82f724fbb4a03e397fd502f0a3c1bbd82ab747ab68b0f462ab6e5b2"} Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.666982 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf" Dec 09 17:43:48 crc kubenswrapper[4853]: I1209 17:43:48.666999 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba6f9cf8f82f724fbb4a03e397fd502f0a3c1bbd82ab747ab68b0f462ab6e5b2" Dec 09 17:43:51 crc kubenswrapper[4853]: E1209 17:43:51.570520 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.592945 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.593638 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.593680 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.594626 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d549d2183bb087074c6104339faa2cc893dc6d092dd3ba1b2b8b10fbb7b84db"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.594688 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://2d549d2183bb087074c6104339faa2cc893dc6d092dd3ba1b2b8b10fbb7b84db" gracePeriod=600 Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.793024 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="2d549d2183bb087074c6104339faa2cc893dc6d092dd3ba1b2b8b10fbb7b84db" exitCode=0 Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.793105 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"2d549d2183bb087074c6104339faa2cc893dc6d092dd3ba1b2b8b10fbb7b84db"} Dec 09 17:43:58 crc kubenswrapper[4853]: I1209 17:43:58.793569 4853 scope.go:117] "RemoveContainer" containerID="f792f6b96ec3e59c104491e444b2afe4c87ff52746f588f359407e31251c966e" Dec 09 17:43:59 crc kubenswrapper[4853]: I1209 17:43:59.806027 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" 
event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c"} Dec 09 17:44:01 crc kubenswrapper[4853]: E1209 17:44:01.572422 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.065741 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc"] Dec 09 17:44:05 crc kubenswrapper[4853]: E1209 17:44:05.066849 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f19467b-be6d-4600-8e1e-4bcb5627e44f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.066867 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f19467b-be6d-4600-8e1e-4bcb5627e44f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:44:05 crc kubenswrapper[4853]: E1209 17:44:05.066884 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="extract-content" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.066890 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="extract-content" Dec 09 17:44:05 crc kubenswrapper[4853]: E1209 17:44:05.066931 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="extract-utilities" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.066938 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="extract-utilities" Dec 09 17:44:05 crc kubenswrapper[4853]: E1209 17:44:05.066944 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="registry-server" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.066950 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="registry-server" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.067185 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f19467b-be6d-4600-8e1e-4bcb5627e44f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.067195 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c51666-5298-4236-9fcd-c8e7734f592e" containerName="registry-server" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.068121 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.071995 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.072283 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.072514 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.072678 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.086228 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc"] Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.155909 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5fdw\" (UniqueName: \"kubernetes.io/projected/a6917b95-0219-402f-8309-76ad558f9756-kube-api-access-q5fdw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.156022 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.156045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.257792 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5fdw\" (UniqueName: \"kubernetes.io/projected/a6917b95-0219-402f-8309-76ad558f9756-kube-api-access-q5fdw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.257907 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.257926 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.264519 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.268102 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.281737 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5fdw\" (UniqueName: \"kubernetes.io/projected/a6917b95-0219-402f-8309-76ad558f9756-kube-api-access-q5fdw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:05 crc kubenswrapper[4853]: I1209 17:44:05.402641 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:44:06 crc kubenswrapper[4853]: I1209 17:44:06.011949 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc"] Dec 09 17:44:06 crc kubenswrapper[4853]: E1209 17:44:06.569967 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:44:06 crc kubenswrapper[4853]: I1209 17:44:06.878811 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" event={"ID":"a6917b95-0219-402f-8309-76ad558f9756","Type":"ContainerStarted","Data":"f6b767787a2aa3af7e6c2e2cd3e396e355abd7195f9436e0f26d10c3b7328554"} Dec 09 17:44:06 crc kubenswrapper[4853]: I1209 17:44:06.879127 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" event={"ID":"a6917b95-0219-402f-8309-76ad558f9756","Type":"ContainerStarted","Data":"110f6265e56e02f078f5019aeae294ed49718c1b560ef475fc8b74d780304e1f"} Dec 09 17:44:06 crc kubenswrapper[4853]: I1209 17:44:06.893463 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" podStartSLOduration=1.361665789 podStartE2EDuration="1.893440838s" podCreationTimestamp="2025-12-09 17:44:05 +0000 UTC" firstStartedPulling="2025-12-09 17:44:06.026052446 +0000 UTC m=+2872.960791628" lastFinishedPulling="2025-12-09 17:44:06.557827495 +0000 UTC m=+2873.492566677" observedRunningTime="2025-12-09 17:44:06.891366493 +0000 UTC m=+2873.826105675" 
watchObservedRunningTime="2025-12-09 17:44:06.893440838 +0000 UTC m=+2873.828180020" Dec 09 17:44:15 crc kubenswrapper[4853]: E1209 17:44:15.570960 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:44:19 crc kubenswrapper[4853]: E1209 17:44:19.569877 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.269006 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2bzhv"] Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.272070 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.300219 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bzhv"] Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.411270 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-utilities\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.411573 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-catalog-content\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.412557 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tdmh\" (UniqueName: \"kubernetes.io/projected/1e92135a-4c58-4431-86bf-5ffe0c092753-kube-api-access-8tdmh\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.514914 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-catalog-content\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.515015 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tdmh\" (UniqueName: \"kubernetes.io/projected/1e92135a-4c58-4431-86bf-5ffe0c092753-kube-api-access-8tdmh\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.515063 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-utilities\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.515483 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-catalog-content\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.515554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-utilities\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.545380 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tdmh\" (UniqueName: \"kubernetes.io/projected/1e92135a-4c58-4431-86bf-5ffe0c092753-kube-api-access-8tdmh\") pod \"redhat-marketplace-2bzhv\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:26 crc kubenswrapper[4853]: E1209 17:44:26.583702 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:44:26 crc kubenswrapper[4853]: I1209 17:44:26.613085 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:27 crc kubenswrapper[4853]: I1209 17:44:27.117012 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bzhv"] Dec 09 17:44:27 crc kubenswrapper[4853]: W1209 17:44:27.117863 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e92135a_4c58_4431_86bf_5ffe0c092753.slice/crio-12acae466216a38a08e72e858401be46386f24c1ed4c65e672e985493d6c197c WatchSource:0}: Error finding container 12acae466216a38a08e72e858401be46386f24c1ed4c65e672e985493d6c197c: Status 404 returned error can't find the container with id 12acae466216a38a08e72e858401be46386f24c1ed4c65e672e985493d6c197c Dec 09 17:44:27 crc kubenswrapper[4853]: I1209 17:44:27.150886 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bzhv" event={"ID":"1e92135a-4c58-4431-86bf-5ffe0c092753","Type":"ContainerStarted","Data":"12acae466216a38a08e72e858401be46386f24c1ed4c65e672e985493d6c197c"} Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.167533 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerID="2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6" exitCode=0 Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.167645 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bzhv" event={"ID":"1e92135a-4c58-4431-86bf-5ffe0c092753","Type":"ContainerDied","Data":"2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6"} Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.665524 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vknch"] Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.668296 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.682604 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vknch"] Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.774383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd54q\" (UniqueName: \"kubernetes.io/projected/5721d9ba-ce7a-4adc-8945-9497a5adbda1-kube-api-access-bd54q\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.774470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-utilities\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.774543 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-catalog-content\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.877344 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd54q\" (UniqueName: \"kubernetes.io/projected/5721d9ba-ce7a-4adc-8945-9497a5adbda1-kube-api-access-bd54q\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.877769 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-utilities\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.877817 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-catalog-content\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.878307 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-utilities\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.878394 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-catalog-content\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:28 crc kubenswrapper[4853]: I1209 17:44:28.909132 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bd54q\" (UniqueName: \"kubernetes.io/projected/5721d9ba-ce7a-4adc-8945-9497a5adbda1-kube-api-access-bd54q\") pod \"community-operators-vknch\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:29 crc kubenswrapper[4853]: I1209 17:44:29.001272 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:29 crc kubenswrapper[4853]: I1209 17:44:29.211366 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bzhv" event={"ID":"1e92135a-4c58-4431-86bf-5ffe0c092753","Type":"ContainerStarted","Data":"ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6"} Dec 09 17:44:29 crc kubenswrapper[4853]: I1209 17:44:29.647425 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vknch"] Dec 09 17:44:29 crc kubenswrapper[4853]: W1209 17:44:29.647821 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5721d9ba_ce7a_4adc_8945_9497a5adbda1.slice/crio-797e83702f2f6134253bd48f2bf2d7ac95976327b020f4865958017392845589 WatchSource:0}: Error finding container 797e83702f2f6134253bd48f2bf2d7ac95976327b020f4865958017392845589: Status 404 returned error can't find the container with id 797e83702f2f6134253bd48f2bf2d7ac95976327b020f4865958017392845589 Dec 09 17:44:30 crc kubenswrapper[4853]: I1209 17:44:30.226204 4853 generic.go:334] "Generic (PLEG): container finished" podID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerID="3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5" exitCode=0 Dec 09 17:44:30 crc kubenswrapper[4853]: I1209 17:44:30.226264 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vknch" event={"ID":"5721d9ba-ce7a-4adc-8945-9497a5adbda1","Type":"ContainerDied","Data":"3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5"} Dec 09 17:44:30 crc kubenswrapper[4853]: I1209 17:44:30.228094 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vknch" event={"ID":"5721d9ba-ce7a-4adc-8945-9497a5adbda1","Type":"ContainerStarted","Data":"797e83702f2f6134253bd48f2bf2d7ac95976327b020f4865958017392845589"} Dec 09 17:44:30 crc kubenswrapper[4853]: I1209 17:44:30.234384 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerID="ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6" exitCode=0 Dec 09 17:44:30 crc kubenswrapper[4853]: I1209 17:44:30.234485 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bzhv" event={"ID":"1e92135a-4c58-4431-86bf-5ffe0c092753","Type":"ContainerDied","Data":"ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6"} Dec 09 17:44:31 crc kubenswrapper[4853]: I1209 17:44:31.246133 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vknch" event={"ID":"5721d9ba-ce7a-4adc-8945-9497a5adbda1","Type":"ContainerStarted","Data":"83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c"} Dec 09 17:44:31 crc kubenswrapper[4853]: I1209 17:44:31.251282 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bzhv" 
event={"ID":"1e92135a-4c58-4431-86bf-5ffe0c092753","Type":"ContainerStarted","Data":"884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42"} Dec 09 17:44:31 crc kubenswrapper[4853]: I1209 17:44:31.295313 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2bzhv" podStartSLOduration=2.830958341 podStartE2EDuration="5.295287886s" podCreationTimestamp="2025-12-09 17:44:26 +0000 UTC" firstStartedPulling="2025-12-09 17:44:28.170684636 +0000 UTC m=+2895.105423858" lastFinishedPulling="2025-12-09 17:44:30.635014221 +0000 UTC m=+2897.569753403" observedRunningTime="2025-12-09 17:44:31.292315037 +0000 UTC m=+2898.227054219" watchObservedRunningTime="2025-12-09 17:44:31.295287886 +0000 UTC m=+2898.230027068" Dec 09 17:44:32 crc kubenswrapper[4853]: I1209 17:44:32.269711 4853 generic.go:334] "Generic (PLEG): container finished" podID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerID="83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c" exitCode=0 Dec 09 17:44:32 crc kubenswrapper[4853]: I1209 17:44:32.269796 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vknch" event={"ID":"5721d9ba-ce7a-4adc-8945-9497a5adbda1","Type":"ContainerDied","Data":"83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c"} Dec 09 17:44:32 crc kubenswrapper[4853]: E1209 17:44:32.569206 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:44:33 crc kubenswrapper[4853]: I1209 17:44:33.284241 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vknch" event={"ID":"5721d9ba-ce7a-4adc-8945-9497a5adbda1","Type":"ContainerStarted","Data":"d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07"} Dec 09 17:44:33 crc kubenswrapper[4853]: I1209 17:44:33.311283 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vknch" podStartSLOduration=2.890581851 podStartE2EDuration="5.311263633s" podCreationTimestamp="2025-12-09 17:44:28 +0000 UTC" firstStartedPulling="2025-12-09 17:44:30.229897487 +0000 UTC m=+2897.164636669" lastFinishedPulling="2025-12-09 17:44:32.650579269 +0000 UTC m=+2899.585318451" observedRunningTime="2025-12-09 17:44:33.303856547 +0000 UTC m=+2900.238595749" watchObservedRunningTime="2025-12-09 17:44:33.311263633 +0000 UTC m=+2900.246002815" Dec 09 17:44:36 crc kubenswrapper[4853]: I1209 17:44:36.613707 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:36 crc kubenswrapper[4853]: I1209 17:44:36.614245 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:36 crc kubenswrapper[4853]: I1209 17:44:36.684256 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:37 crc kubenswrapper[4853]: I1209 17:44:37.424447 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:38 crc kubenswrapper[4853]: I1209 17:44:38.062803 4853 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bzhv"] Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.001824 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.001902 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.099309 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.352471 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2bzhv" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerName="registry-server" containerID="cri-o://884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42" gracePeriod=2 Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.417610 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.826563 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.860939 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-utilities\") pod \"1e92135a-4c58-4431-86bf-5ffe0c092753\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.861129 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-catalog-content\") pod \"1e92135a-4c58-4431-86bf-5ffe0c092753\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.861398 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdmh\" (UniqueName: \"kubernetes.io/projected/1e92135a-4c58-4431-86bf-5ffe0c092753-kube-api-access-8tdmh\") pod \"1e92135a-4c58-4431-86bf-5ffe0c092753\" (UID: \"1e92135a-4c58-4431-86bf-5ffe0c092753\") " Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.861690 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-utilities" (OuterVolumeSpecName: "utilities") pod "1e92135a-4c58-4431-86bf-5ffe0c092753" (UID: "1e92135a-4c58-4431-86bf-5ffe0c092753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.871589 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e92135a-4c58-4431-86bf-5ffe0c092753-kube-api-access-8tdmh" (OuterVolumeSpecName: "kube-api-access-8tdmh") pod "1e92135a-4c58-4431-86bf-5ffe0c092753" (UID: "1e92135a-4c58-4431-86bf-5ffe0c092753"). InnerVolumeSpecName "kube-api-access-8tdmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.888986 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e92135a-4c58-4431-86bf-5ffe0c092753" (UID: "1e92135a-4c58-4431-86bf-5ffe0c092753"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.964568 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdmh\" (UniqueName: \"kubernetes.io/projected/1e92135a-4c58-4431-86bf-5ffe0c092753-kube-api-access-8tdmh\") on node \"crc\" DevicePath \"\"" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.964885 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:44:39 crc kubenswrapper[4853]: I1209 17:44:39.964999 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e92135a-4c58-4431-86bf-5ffe0c092753-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.369526 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerID="884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42" exitCode=0 Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.369679 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bzhv" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.369720 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bzhv" event={"ID":"1e92135a-4c58-4431-86bf-5ffe0c092753","Type":"ContainerDied","Data":"884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42"} Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.369811 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bzhv" event={"ID":"1e92135a-4c58-4431-86bf-5ffe0c092753","Type":"ContainerDied","Data":"12acae466216a38a08e72e858401be46386f24c1ed4c65e672e985493d6c197c"} Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.369842 4853 scope.go:117] "RemoveContainer" containerID="884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.421119 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bzhv"] Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.432936 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bzhv"] Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.438714 4853 scope.go:117] "RemoveContainer" containerID="ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.468576 4853 scope.go:117] "RemoveContainer" containerID="2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.541068 4853 scope.go:117] "RemoveContainer" containerID="884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42" Dec 09 17:44:40 crc kubenswrapper[4853]: E1209 17:44:40.541749 4853 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42\": container with ID starting with 884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42 not found: ID does not exist" containerID="884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.541804 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42"} err="failed to get container status \"884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42\": rpc error: code = NotFound desc = could not find container \"884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42\": container with ID starting with 884d8a606b0502daebc8807e8cd2180673abc9e5c79289feb37fb89592ea1c42 not found: ID does not exist" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.541845 4853 scope.go:117] "RemoveContainer" containerID="ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6" Dec 09 17:44:40 crc kubenswrapper[4853]: E1209 17:44:40.542763 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6\": container with ID starting with ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6 not found: ID does not exist" containerID="ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.542802 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6"} err="failed to get container status \"ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6\": rpc error: code = NotFound desc = could not find container \"ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6\": container with ID starting with ec12cdb7d009a2f5a146e02587ca2b256f20d58cce66ef4b3e0ffcff49cb8ef6 not found: ID does not exist" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.542829 4853 scope.go:117] "RemoveContainer" containerID="2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6" Dec 09 17:44:40 crc kubenswrapper[4853]: E1209 17:44:40.543281 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6\": container with ID starting with 2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6 not found: ID does not exist" containerID="2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.543333 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6"} err="failed to get container status \"2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6\": rpc error: code = NotFound desc = could not find container \"2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6\": container with ID starting with 2d4339584ce83143003beca75125e4a30c25862c750f596516ba0658d83962c6 not found: ID does not exist" Dec 09 17:44:40 crc kubenswrapper[4853]: E1209 17:44:40.569661 4853 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:44:40 crc kubenswrapper[4853]: I1209 17:44:40.855214 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vknch"] Dec 09 17:44:41 crc kubenswrapper[4853]: I1209 17:44:41.384219 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vknch" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="registry-server" containerID="cri-o://d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07" gracePeriod=2 Dec 09 17:44:41 crc kubenswrapper[4853]: I1209 17:44:41.584980 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" path="/var/lib/kubelet/pods/1e92135a-4c58-4431-86bf-5ffe0c092753/volumes" Dec 09 17:44:41 crc kubenswrapper[4853]: I1209 17:44:41.949454 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.128038 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd54q\" (UniqueName: \"kubernetes.io/projected/5721d9ba-ce7a-4adc-8945-9497a5adbda1-kube-api-access-bd54q\") pod \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.128290 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-catalog-content\") pod \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.128444 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-utilities\") pod \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\" (UID: \"5721d9ba-ce7a-4adc-8945-9497a5adbda1\") " Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.129765 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-utilities" (OuterVolumeSpecName: "utilities") pod "5721d9ba-ce7a-4adc-8945-9497a5adbda1" (UID: "5721d9ba-ce7a-4adc-8945-9497a5adbda1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.138770 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5721d9ba-ce7a-4adc-8945-9497a5adbda1-kube-api-access-bd54q" (OuterVolumeSpecName: "kube-api-access-bd54q") pod "5721d9ba-ce7a-4adc-8945-9497a5adbda1" (UID: "5721d9ba-ce7a-4adc-8945-9497a5adbda1"). InnerVolumeSpecName "kube-api-access-bd54q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.202031 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5721d9ba-ce7a-4adc-8945-9497a5adbda1" (UID: "5721d9ba-ce7a-4adc-8945-9497a5adbda1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.231567 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.231618 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5721d9ba-ce7a-4adc-8945-9497a5adbda1-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.231633 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd54q\" (UniqueName: \"kubernetes.io/projected/5721d9ba-ce7a-4adc-8945-9497a5adbda1-kube-api-access-bd54q\") on node \"crc\" DevicePath \"\"" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.400809 4853 generic.go:334] "Generic (PLEG): container finished" podID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerID="d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07" exitCode=0 Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.400863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vknch" event={"ID":"5721d9ba-ce7a-4adc-8945-9497a5adbda1","Type":"ContainerDied","Data":"d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07"} Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.400894 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vknch" event={"ID":"5721d9ba-ce7a-4adc-8945-9497a5adbda1","Type":"ContainerDied","Data":"797e83702f2f6134253bd48f2bf2d7ac95976327b020f4865958017392845589"} Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.400916 4853 scope.go:117] "RemoveContainer" containerID="d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.401075 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vknch" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.455852 4853 scope.go:117] "RemoveContainer" containerID="83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.459140 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vknch"] Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.485224 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vknch"] Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.490398 4853 scope.go:117] "RemoveContainer" containerID="3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.534933 4853 scope.go:117] "RemoveContainer" containerID="d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07" Dec 09 17:44:42 crc kubenswrapper[4853]: E1209 17:44:42.535751 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07\": container with ID starting with d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07 not found: ID does not exist" containerID="d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.535817 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07"} err="failed to get container status \"d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07\": rpc error: code = NotFound desc = could not find container \"d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07\": container with ID starting with d85486b30b956b7b83c820cbb436593301258d3748f164c03b5379da81fe6a07 not found: ID does not exist" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.535859 4853 scope.go:117] "RemoveContainer" containerID="83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c" Dec 09 17:44:42 crc kubenswrapper[4853]: E1209 17:44:42.536362 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c\": container with ID starting with 83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c not found: ID does not exist" containerID="83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.536388 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c"} err="failed to get container status \"83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c\": rpc error: code = NotFound desc = could not find container \"83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c\": container with ID starting with 83aac2d0651ead73cd74baa5a09d90b717d1566fdb9eda458a3fb228afda8e1c not found: ID does not exist" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.536413 4853 scope.go:117] "RemoveContainer" containerID="3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5" Dec 09 17:44:42 crc kubenswrapper[4853]: E1209 17:44:42.536768 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5\": container with ID starting with 3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5 not found: ID does not exist" containerID="3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5" Dec 09 17:44:42 crc kubenswrapper[4853]: I1209 17:44:42.536807 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5"} err="failed to get container status \"3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5\": rpc error: code = NotFound desc = could not find container \"3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5\": container with ID starting with 3aa625c60b832b0f0d4d9e13f6ec239b5c19be2a649a25af74c3002b7dd543e5 not found: ID does not exist" Dec 09 17:44:43 crc kubenswrapper[4853]: I1209 17:44:43.583006 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" path="/var/lib/kubelet/pods/5721d9ba-ce7a-4adc-8945-9497a5adbda1/volumes" Dec 09 17:44:46 crc kubenswrapper[4853]: E1209 17:44:46.572864 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:44:51 crc kubenswrapper[4853]: E1209 17:44:51.571881 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:44:57 crc kubenswrapper[4853]: E1209 17:44:57.570265 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.165994 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86"] Dec 09 17:45:00 crc kubenswrapper[4853]: E1209 17:45:00.167145 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="extract-content" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167169 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="extract-content" Dec 09 17:45:00 crc kubenswrapper[4853]: E1209 17:45:00.167209 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerName="extract-content" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167226 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerName="extract-content" Dec 09 17:45:00 crc kubenswrapper[4853]: E1209 17:45:00.167292 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" 
containerName="registry-server" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167302 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerName="registry-server" Dec 09 17:45:00 crc kubenswrapper[4853]: E1209 17:45:00.167317 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerName="extract-utilities" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167326 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerName="extract-utilities" Dec 09 17:45:00 crc kubenswrapper[4853]: E1209 17:45:00.167338 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="registry-server" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167346 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="registry-server" Dec 09 17:45:00 crc kubenswrapper[4853]: E1209 17:45:00.167362 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="extract-utilities" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167370 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="extract-utilities" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167667 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5721d9ba-ce7a-4adc-8945-9497a5adbda1" containerName="registry-server" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.167701 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e92135a-4c58-4431-86bf-5ffe0c092753" containerName="registry-server" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.168751 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.170772 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.171729 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.180407 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86"] Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.348316 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b94aea8-6525-4e97-b3bc-66eea871224b-config-volume\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.348395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b94aea8-6525-4e97-b3bc-66eea871224b-secret-volume\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.348447 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4zm\" (UniqueName: \"kubernetes.io/projected/3b94aea8-6525-4e97-b3bc-66eea871224b-kube-api-access-jr4zm\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.450821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b94aea8-6525-4e97-b3bc-66eea871224b-config-volume\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.451169 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b94aea8-6525-4e97-b3bc-66eea871224b-secret-volume\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.451340 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4zm\" (UniqueName: \"kubernetes.io/projected/3b94aea8-6525-4e97-b3bc-66eea871224b-kube-api-access-jr4zm\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.451728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b94aea8-6525-4e97-b3bc-66eea871224b-config-volume\") pod 
\"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.456955 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b94aea8-6525-4e97-b3bc-66eea871224b-secret-volume\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.472900 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr4zm\" (UniqueName: \"kubernetes.io/projected/3b94aea8-6525-4e97-b3bc-66eea871224b-kube-api-access-jr4zm\") pod \"collect-profiles-29421705-dlv86\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.518377 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:00 crc kubenswrapper[4853]: I1209 17:45:00.974030 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86"] Dec 09 17:45:01 crc kubenswrapper[4853]: I1209 17:45:01.679016 4853 generic.go:334] "Generic (PLEG): container finished" podID="3b94aea8-6525-4e97-b3bc-66eea871224b" containerID="e2fe9913441240ba53f26e8711d349963e1f3ccaceadbc922402220d00c203c4" exitCode=0 Dec 09 17:45:01 crc kubenswrapper[4853]: I1209 17:45:01.679085 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" event={"ID":"3b94aea8-6525-4e97-b3bc-66eea871224b","Type":"ContainerDied","Data":"e2fe9913441240ba53f26e8711d349963e1f3ccaceadbc922402220d00c203c4"} Dec 09 17:45:01 crc kubenswrapper[4853]: I1209 17:45:01.679338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" event={"ID":"3b94aea8-6525-4e97-b3bc-66eea871224b","Type":"ContainerStarted","Data":"50debfceee2cec296ba3f5b863b595beed059991477bb8f6f3d4c209cc986a5d"} Dec 09 17:45:02 crc kubenswrapper[4853]: E1209 17:45:02.571683 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.089446 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.234693 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b94aea8-6525-4e97-b3bc-66eea871224b-config-volume\") pod \"3b94aea8-6525-4e97-b3bc-66eea871224b\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.234964 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b94aea8-6525-4e97-b3bc-66eea871224b-secret-volume\") pod \"3b94aea8-6525-4e97-b3bc-66eea871224b\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.235020 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr4zm\" (UniqueName: \"kubernetes.io/projected/3b94aea8-6525-4e97-b3bc-66eea871224b-kube-api-access-jr4zm\") pod \"3b94aea8-6525-4e97-b3bc-66eea871224b\" (UID: \"3b94aea8-6525-4e97-b3bc-66eea871224b\") " Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.235511 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b94aea8-6525-4e97-b3bc-66eea871224b-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b94aea8-6525-4e97-b3bc-66eea871224b" (UID: "3b94aea8-6525-4e97-b3bc-66eea871224b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.242293 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b94aea8-6525-4e97-b3bc-66eea871224b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b94aea8-6525-4e97-b3bc-66eea871224b" (UID: "3b94aea8-6525-4e97-b3bc-66eea871224b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.246492 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b94aea8-6525-4e97-b3bc-66eea871224b-kube-api-access-jr4zm" (OuterVolumeSpecName: "kube-api-access-jr4zm") pod "3b94aea8-6525-4e97-b3bc-66eea871224b" (UID: "3b94aea8-6525-4e97-b3bc-66eea871224b"). InnerVolumeSpecName "kube-api-access-jr4zm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.337514 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b94aea8-6525-4e97-b3bc-66eea871224b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.337548 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr4zm\" (UniqueName: \"kubernetes.io/projected/3b94aea8-6525-4e97-b3bc-66eea871224b-kube-api-access-jr4zm\") on node \"crc\" DevicePath \"\"" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.337557 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b94aea8-6525-4e97-b3bc-66eea871224b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.703474 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" event={"ID":"3b94aea8-6525-4e97-b3bc-66eea871224b","Type":"ContainerDied","Data":"50debfceee2cec296ba3f5b863b595beed059991477bb8f6f3d4c209cc986a5d"} Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.703578 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50debfceee2cec296ba3f5b863b595beed059991477bb8f6f3d4c209cc986a5d" Dec 09 17:45:03 crc kubenswrapper[4853]: I1209 17:45:03.703589 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86" Dec 09 17:45:04 crc kubenswrapper[4853]: I1209 17:45:04.185566 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68"] Dec 09 17:45:04 crc kubenswrapper[4853]: I1209 17:45:04.196193 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421660-9mq68"] Dec 09 17:45:05 crc kubenswrapper[4853]: I1209 17:45:05.580419 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53665226-059a-426d-9d71-9ee8ecc2c2b4" path="/var/lib/kubelet/pods/53665226-059a-426d-9d71-9ee8ecc2c2b4/volumes" Dec 09 17:45:09 crc kubenswrapper[4853]: E1209 17:45:09.572945 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:45:13 crc kubenswrapper[4853]: E1209 17:45:13.576018 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:45:20 crc kubenswrapper[4853]: E1209 17:45:20.570353 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:45:26 crc kubenswrapper[4853]: E1209 17:45:26.571649 4853 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:45:29 crc kubenswrapper[4853]: I1209 17:45:29.942403 4853 scope.go:117] "RemoveContainer" containerID="89a3c70c364dc675473593eb5e6b97266dc1ca7566df2838259c02000211c6e8" Dec 09 17:45:33 crc kubenswrapper[4853]: E1209 17:45:33.579758 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:45:41 crc kubenswrapper[4853]: E1209 17:45:41.570529 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:45:44 crc kubenswrapper[4853]: E1209 17:45:44.569771 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:45:55 crc kubenswrapper[4853]: E1209 17:45:55.570634 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:45:56 crc kubenswrapper[4853]: E1209 17:45:56.569700 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:45:58 crc kubenswrapper[4853]: I1209 17:45:58.593937 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:45:58 crc kubenswrapper[4853]: I1209 17:45:58.594903 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:46:06 crc kubenswrapper[4853]: E1209 17:46:06.569349 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:46:07 crc kubenswrapper[4853]: I1209 17:46:07.571291 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:46:07 crc kubenswrapper[4853]: E1209 17:46:07.694710 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:46:07 crc kubenswrapper[4853]: E1209 17:46:07.694769 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:46:07 crc kubenswrapper[4853]: E1209 17:46:07.694892 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:46:07 crc kubenswrapper[4853]: E1209 17:46:07.696302 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:46:21 crc kubenswrapper[4853]: E1209 17:46:21.695907 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:46:21 crc kubenswrapper[4853]: E1209 17:46:21.696514 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
Dec 09 17:46:21 crc kubenswrapper[4853]: E1209 17:46:21.696514 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Dec 09 17:46:21 crc kubenswrapper[4853]: E1209 17:46:21.696685 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 17:46:21 crc kubenswrapper[4853]: E1209 17:46:21.698085 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:46:22 crc kubenswrapper[4853]: E1209 17:46:22.569887 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:46:28 crc kubenswrapper[4853]: I1209 17:46:28.593101 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:46:28 crc kubenswrapper[4853]: I1209 17:46:28.593700 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:46:34 crc kubenswrapper[4853]: E1209 17:46:34.571336 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:46:36 crc kubenswrapper[4853]: E1209 17:46:36.569109 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:46:47 crc kubenswrapper[4853]: E1209 17:46:47.571798 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:46:49 crc kubenswrapper[4853]: E1209 17:46:49.569736 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:46:58 crc kubenswrapper[4853]: I1209 17:46:58.592827 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:46:58 crc kubenswrapper[4853]: I1209 17:46:58.593315 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:46:58 crc kubenswrapper[4853]: I1209 17:46:58.593373 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:46:58 crc kubenswrapper[4853]: I1209 17:46:58.594252 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:46:58 crc kubenswrapper[4853]: I1209 17:46:58.594309 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" gracePeriod=600 Dec 09 17:46:58 crc kubenswrapper[4853]: E1209 17:46:58.731836 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:46:59 crc kubenswrapper[4853]: I1209 17:46:59.265828 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" exitCode=0 Dec 09 17:46:59 crc kubenswrapper[4853]: I1209 17:46:59.265904 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c"} Dec 09 17:46:59 crc kubenswrapper[4853]: I1209 17:46:59.266182 4853 scope.go:117] "RemoveContainer" containerID="2d549d2183bb087074c6104339faa2cc893dc6d092dd3ba1b2b8b10fbb7b84db" Dec 09 17:46:59 crc kubenswrapper[4853]: I1209 17:46:59.266956 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:46:59 crc kubenswrapper[4853]: E1209 17:46:59.267289 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:47:02 crc kubenswrapper[4853]: E1209 17:47:02.593769 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:47:04 crc kubenswrapper[4853]: E1209 17:47:04.569060 4853 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:47:09 crc kubenswrapper[4853]: I1209 17:47:09.567302 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:47:09 crc kubenswrapper[4853]: E1209 17:47:09.568138 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:47:14 crc kubenswrapper[4853]: E1209 17:47:14.594861 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:47:19 crc kubenswrapper[4853]: E1209 17:47:19.570032 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:47:22 crc kubenswrapper[4853]: I1209 17:47:22.567435 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:47:22 crc kubenswrapper[4853]: E1209 17:47:22.568168 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:47:27 crc kubenswrapper[4853]: E1209 17:47:27.569397 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:47:31 crc kubenswrapper[4853]: E1209 17:47:31.570747 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:47:36 crc kubenswrapper[4853]: I1209 17:47:36.567346 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:47:36 crc kubenswrapper[4853]: E1209 17:47:36.568366 4853 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:47:39 crc kubenswrapper[4853]: E1209 17:47:39.569337 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:47:42 crc kubenswrapper[4853]: E1209 17:47:42.569204 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:47:51 crc kubenswrapper[4853]: I1209 17:47:51.567037 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:47:51 crc kubenswrapper[4853]: E1209 17:47:51.568013 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:47:53 crc kubenswrapper[4853]: E1209 17:47:53.578271 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:47:57 crc kubenswrapper[4853]: E1209 17:47:57.570076 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:48:03 crc kubenswrapper[4853]: I1209 17:48:03.575094 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:48:03 crc kubenswrapper[4853]: E1209 17:48:03.576014 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:48:06 crc kubenswrapper[4853]: E1209 17:48:06.571081 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:48:10 crc kubenswrapper[4853]: E1209 17:48:10.570854 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:48:14 crc kubenswrapper[4853]: I1209 17:48:14.570997 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:48:14 crc kubenswrapper[4853]: E1209 17:48:14.571626 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:48:20 crc kubenswrapper[4853]: E1209 17:48:20.570895 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:48:21 crc kubenswrapper[4853]: E1209 17:48:21.569786 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:48:25 crc kubenswrapper[4853]: I1209 17:48:25.569629 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:48:25 crc kubenswrapper[4853]: E1209 17:48:25.570663 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:48:34 crc kubenswrapper[4853]: E1209 17:48:34.569011 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:48:35 crc kubenswrapper[4853]: E1209 17:48:35.569047 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 
09 17:48:40 crc kubenswrapper[4853]: I1209 17:48:40.567673 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:48:40 crc kubenswrapper[4853]: E1209 17:48:40.568392 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:48:46 crc kubenswrapper[4853]: E1209 17:48:46.572218 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:48:47 crc kubenswrapper[4853]: E1209 17:48:47.570735 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:48:53 crc kubenswrapper[4853]: I1209 17:48:53.583168 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:48:53 crc kubenswrapper[4853]: E1209 17:48:53.584057 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:49:00 crc kubenswrapper[4853]: E1209 17:49:00.569414 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:49:02 crc kubenswrapper[4853]: E1209 17:49:02.569127 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:49:05 crc kubenswrapper[4853]: I1209 17:49:05.568747 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:49:05 crc kubenswrapper[4853]: E1209 17:49:05.570218 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:49:11 crc kubenswrapper[4853]: E1209 17:49:11.571302 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:49:15 crc kubenswrapper[4853]: E1209 17:49:15.570459 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:49:18 crc kubenswrapper[4853]: I1209 17:49:18.567181 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:49:18 crc kubenswrapper[4853]: E1209 17:49:18.567766 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:49:23 crc kubenswrapper[4853]: E1209 17:49:23.577210 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:49:28 crc kubenswrapper[4853]: E1209 17:49:28.570168 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:49:31 crc kubenswrapper[4853]: I1209 17:49:31.567906 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:49:31 crc kubenswrapper[4853]: E1209 17:49:31.569030 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:49:34 crc kubenswrapper[4853]: E1209 17:49:34.570565 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:49:40 crc kubenswrapper[4853]: E1209 17:49:40.572078 4853 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:49:44 crc kubenswrapper[4853]: I1209 17:49:44.566952 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:49:44 crc kubenswrapper[4853]: E1209 17:49:44.567723 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:49:45 crc kubenswrapper[4853]: E1209 17:49:45.569849 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.664022 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7p9jg"] Dec 09 17:49:45 crc kubenswrapper[4853]: E1209 17:49:45.664843 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b94aea8-6525-4e97-b3bc-66eea871224b" containerName="collect-profiles" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.664860 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b94aea8-6525-4e97-b3bc-66eea871224b" containerName="collect-profiles" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.665144 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b94aea8-6525-4e97-b3bc-66eea871224b" containerName="collect-profiles" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.668034 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.674660 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7p9jg"] Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.805136 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-catalog-content\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.805245 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsks\" (UniqueName: \"kubernetes.io/projected/fb130e39-5eaa-4535-b525-70c76835da53-kube-api-access-fhsks\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.805441 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-utilities\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.907712 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-utilities\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.907788 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-catalog-content\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.907873 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsks\" (UniqueName: \"kubernetes.io/projected/fb130e39-5eaa-4535-b525-70c76835da53-kube-api-access-fhsks\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.908410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-utilities\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.908575 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-catalog-content\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.930873 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fhsks\" (UniqueName: \"kubernetes.io/projected/fb130e39-5eaa-4535-b525-70c76835da53-kube-api-access-fhsks\") pod \"redhat-operators-7p9jg\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:45 crc kubenswrapper[4853]: I1209 17:49:45.999523 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:49:46 crc kubenswrapper[4853]: I1209 17:49:46.500434 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7p9jg"] Dec 09 17:49:47 crc kubenswrapper[4853]: I1209 17:49:47.230643 4853 generic.go:334] "Generic (PLEG): container finished" podID="fb130e39-5eaa-4535-b525-70c76835da53" containerID="04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8" exitCode=0 Dec 09 17:49:47 crc kubenswrapper[4853]: I1209 17:49:47.230761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7p9jg" event={"ID":"fb130e39-5eaa-4535-b525-70c76835da53","Type":"ContainerDied","Data":"04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8"} Dec 09 17:49:47 crc kubenswrapper[4853]: I1209 17:49:47.231241 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7p9jg" event={"ID":"fb130e39-5eaa-4535-b525-70c76835da53","Type":"ContainerStarted","Data":"6be57c17e7f3af8a977975907b8feec75f14d84dd51a1e2ac785cd0aabaff3a3"} Dec 09 17:49:48 crc kubenswrapper[4853]: I1209 17:49:48.250273 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7p9jg" event={"ID":"fb130e39-5eaa-4535-b525-70c76835da53","Type":"ContainerStarted","Data":"fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167"} Dec 09 17:49:52 crc kubenswrapper[4853]: I1209 17:49:52.290708 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7p9jg" event={"ID":"fb130e39-5eaa-4535-b525-70c76835da53","Type":"ContainerDied","Data":"fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167"} Dec 09 17:49:52 crc kubenswrapper[4853]: I1209 17:49:52.290698 4853 generic.go:334] "Generic (PLEG): container finished" podID="fb130e39-5eaa-4535-b525-70c76835da53" containerID="fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167" exitCode=0 Dec 09 17:49:53 crc kubenswrapper[4853]: I1209 17:49:53.302624 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7p9jg" event={"ID":"fb130e39-5eaa-4535-b525-70c76835da53","Type":"ContainerStarted","Data":"90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1"} Dec 09 17:49:53 crc kubenswrapper[4853]: I1209 17:49:53.334211 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7p9jg" podStartSLOduration=2.862876398 podStartE2EDuration="8.334186653s" podCreationTimestamp="2025-12-09 17:49:45 +0000 UTC" firstStartedPulling="2025-12-09 17:49:47.234138033 +0000 UTC m=+3214.168877215" lastFinishedPulling="2025-12-09 17:49:52.705448288 +0000 UTC m=+3219.640187470" observedRunningTime="2025-12-09 17:49:53.326616375 +0000 UTC m=+3220.261355557" watchObservedRunningTime="2025-12-09 17:49:53.334186653 +0000 UTC m=+3220.268925875" Dec 09 17:49:55 crc kubenswrapper[4853]: I1209 17:49:55.567177 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 
17:49:55 crc kubenswrapper[4853]: E1209 17:49:55.567676 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 17:49:55 crc kubenswrapper[4853]: E1209 17:49:55.569856 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 17:49:55 crc kubenswrapper[4853]: I1209 17:49:55.999736 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7p9jg"
Dec 09 17:49:55 crc kubenswrapper[4853]: I1209 17:49:56.000001 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7p9jg"
Dec 09 17:49:57 crc kubenswrapper[4853]: I1209 17:49:57.052563 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7p9jg" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="registry-server" probeResult="failure" output=<
Dec 09 17:49:57 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s
Dec 09 17:49:57 crc kubenswrapper[4853]: >
Dec 09 17:49:58 crc kubenswrapper[4853]: E1209 17:49:58.569183 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 17:50:06 crc kubenswrapper[4853]: I1209 17:50:06.048581 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7p9jg"
Dec 09 17:50:06 crc kubenswrapper[4853]: I1209 17:50:06.119491 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7p9jg"
Dec 09 17:50:06 crc kubenswrapper[4853]: I1209 17:50:06.301052 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7p9jg"]
Dec 09 17:50:06 crc kubenswrapper[4853]: I1209 17:50:06.567193 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c"
Dec 09 17:50:06 crc kubenswrapper[4853]: E1209 17:50:06.567560 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
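
The multi-line probe output above is passed through verbatim from the probe binary inside the registry-server container (the message format matches the grpc_health_probe tool, though the pod spec itself is not in this log): it could not connect to gRPC port :50051 within its 1s budget while the catalog content was still loading, and ten seconds later the same probe reports started/ready. The Go sketch below reproduces only the connect-level part of that check with the same timeout; the real probe then speaks the gRPC health protocol, which this sketch does not:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same 1s budget the probe output above reports.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
        if err != nil {
            // Comparable to: timeout: failed to connect service ":50051" within 1s
            fmt.Println("probe failure:", err)
            return
        }
        conn.Close()
        fmt.Println("probe success: registry-server is accepting connections")
    }

A startup probe failing a few times while a catalog pod unpacks its content, then flipping to started/ready, is routine; unlike the liveness failure earlier, it never kills the container.
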
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:50:07 crc kubenswrapper[4853]: I1209 17:50:07.531317 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7p9jg" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="registry-server" containerID="cri-o://90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1" gracePeriod=2 Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.070402 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.108011 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-catalog-content\") pod \"fb130e39-5eaa-4535-b525-70c76835da53\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.108226 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-utilities\") pod \"fb130e39-5eaa-4535-b525-70c76835da53\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.108277 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhsks\" (UniqueName: \"kubernetes.io/projected/fb130e39-5eaa-4535-b525-70c76835da53-kube-api-access-fhsks\") pod \"fb130e39-5eaa-4535-b525-70c76835da53\" (UID: \"fb130e39-5eaa-4535-b525-70c76835da53\") " Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.115667 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-utilities" (OuterVolumeSpecName: "utilities") pod "fb130e39-5eaa-4535-b525-70c76835da53" (UID: "fb130e39-5eaa-4535-b525-70c76835da53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.118274 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb130e39-5eaa-4535-b525-70c76835da53-kube-api-access-fhsks" (OuterVolumeSpecName: "kube-api-access-fhsks") pod "fb130e39-5eaa-4535-b525-70c76835da53" (UID: "fb130e39-5eaa-4535-b525-70c76835da53"). InnerVolumeSpecName "kube-api-access-fhsks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.211733 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.211779 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhsks\" (UniqueName: \"kubernetes.io/projected/fb130e39-5eaa-4535-b525-70c76835da53-kube-api-access-fhsks\") on node \"crc\" DevicePath \"\"" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.234370 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb130e39-5eaa-4535-b525-70c76835da53" (UID: "fb130e39-5eaa-4535-b525-70c76835da53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.314362 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb130e39-5eaa-4535-b525-70c76835da53-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.586168 4853 generic.go:334] "Generic (PLEG): container finished" podID="fb130e39-5eaa-4535-b525-70c76835da53" containerID="90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1" exitCode=0 Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.586232 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7p9jg" event={"ID":"fb130e39-5eaa-4535-b525-70c76835da53","Type":"ContainerDied","Data":"90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1"} Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.586289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7p9jg" event={"ID":"fb130e39-5eaa-4535-b525-70c76835da53","Type":"ContainerDied","Data":"6be57c17e7f3af8a977975907b8feec75f14d84dd51a1e2ac785cd0aabaff3a3"} Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.586310 4853 scope.go:117] "RemoveContainer" containerID="90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.586932 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7p9jg" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.616403 4853 scope.go:117] "RemoveContainer" containerID="fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.642671 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7p9jg"] Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.646648 4853 scope.go:117] "RemoveContainer" containerID="04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.657844 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7p9jg"] Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.706792 4853 scope.go:117] "RemoveContainer" containerID="90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1" Dec 09 17:50:08 crc kubenswrapper[4853]: E1209 17:50:08.707280 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1\": container with ID starting with 90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1 not found: ID does not exist" containerID="90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.707339 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1"} err="failed to get container status \"90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1\": rpc error: code = NotFound desc = could not find container \"90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1\": container with ID starting with 90bbf8bc48f0a6ad430a8452217158253686ed5466ce7530f7ca9763cbc9b3c1 not found: ID does not exist" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.707384 4853 scope.go:117] "RemoveContainer" containerID="fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167" Dec 09 17:50:08 crc kubenswrapper[4853]: E1209 17:50:08.707921 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167\": container with ID starting with fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167 not found: ID does not exist" containerID="fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.707950 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167"} err="failed to get container status \"fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167\": rpc error: code = NotFound desc = could not find container \"fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167\": container with ID starting with fc6595504d08a90487215013500671b393d187e6615e40aa3329574ef9ec6167 not found: ID does not exist" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.707968 4853 scope.go:117] "RemoveContainer" containerID="04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8" Dec 09 17:50:08 crc kubenswrapper[4853]: E1209 17:50:08.708247 4853 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8\": container with ID starting with 04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8 not found: ID does not exist" containerID="04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8" Dec 09 17:50:08 crc kubenswrapper[4853]: I1209 17:50:08.708270 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8"} err="failed to get container status \"04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8\": rpc error: code = NotFound desc = could not find container \"04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8\": container with ID starting with 04efcca74c0325bce2b10e705dc2424c47ab6dbc24cd3512de555915bf90a2e8 not found: ID does not exist" Dec 09 17:50:09 crc kubenswrapper[4853]: E1209 17:50:09.569712 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:50:09 crc kubenswrapper[4853]: I1209 17:50:09.580862 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb130e39-5eaa-4535-b525-70c76835da53" path="/var/lib/kubelet/pods/fb130e39-5eaa-4535-b525-70c76835da53/volumes" Dec 09 17:50:20 crc kubenswrapper[4853]: E1209 17:50:20.570213 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:50:21 crc kubenswrapper[4853]: I1209 17:50:21.567738 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:50:21 crc kubenswrapper[4853]: E1209 17:50:21.568138 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:50:24 crc kubenswrapper[4853]: E1209 17:50:24.569690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:50:25 crc kubenswrapper[4853]: I1209 17:50:25.808834 4853 generic.go:334] "Generic (PLEG): container finished" podID="a6917b95-0219-402f-8309-76ad558f9756" containerID="f6b767787a2aa3af7e6c2e2cd3e396e355abd7195f9436e0f26d10c3b7328554" exitCode=2 Dec 09 17:50:25 crc kubenswrapper[4853]: I1209 17:50:25.808927 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" 
event={"ID":"a6917b95-0219-402f-8309-76ad558f9756","Type":"ContainerDied","Data":"f6b767787a2aa3af7e6c2e2cd3e396e355abd7195f9436e0f26d10c3b7328554"} Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.292996 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.295926 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5fdw\" (UniqueName: \"kubernetes.io/projected/a6917b95-0219-402f-8309-76ad558f9756-kube-api-access-q5fdw\") pod \"a6917b95-0219-402f-8309-76ad558f9756\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.296203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-ssh-key\") pod \"a6917b95-0219-402f-8309-76ad558f9756\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.296369 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-inventory\") pod \"a6917b95-0219-402f-8309-76ad558f9756\" (UID: \"a6917b95-0219-402f-8309-76ad558f9756\") " Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.311030 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6917b95-0219-402f-8309-76ad558f9756-kube-api-access-q5fdw" (OuterVolumeSpecName: "kube-api-access-q5fdw") pod "a6917b95-0219-402f-8309-76ad558f9756" (UID: "a6917b95-0219-402f-8309-76ad558f9756"). InnerVolumeSpecName "kube-api-access-q5fdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.336484 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a6917b95-0219-402f-8309-76ad558f9756" (UID: "a6917b95-0219-402f-8309-76ad558f9756"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.368386 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-inventory" (OuterVolumeSpecName: "inventory") pod "a6917b95-0219-402f-8309-76ad558f9756" (UID: "a6917b95-0219-402f-8309-76ad558f9756"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.403520 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.403558 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5fdw\" (UniqueName: \"kubernetes.io/projected/a6917b95-0219-402f-8309-76ad558f9756-kube-api-access-q5fdw\") on node \"crc\" DevicePath \"\"" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.403573 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6917b95-0219-402f-8309-76ad558f9756-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.833526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" event={"ID":"a6917b95-0219-402f-8309-76ad558f9756","Type":"ContainerDied","Data":"110f6265e56e02f078f5019aeae294ed49718c1b560ef475fc8b74d780304e1f"} Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.833584 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="110f6265e56e02f078f5019aeae294ed49718c1b560ef475fc8b74d780304e1f" Dec 09 17:50:27 crc kubenswrapper[4853]: I1209 17:50:27.833906 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc" Dec 09 17:50:31 crc kubenswrapper[4853]: E1209 17:50:31.569952 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:50:34 crc kubenswrapper[4853]: I1209 17:50:34.568183 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:50:34 crc kubenswrapper[4853]: E1209 17:50:34.571140 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:50:38 crc kubenswrapper[4853]: E1209 17:50:38.570492 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:50:45 crc kubenswrapper[4853]: I1209 17:50:45.567744 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:50:45 crc kubenswrapper[4853]: E1209 17:50:45.568525 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:50:45 crc kubenswrapper[4853]: E1209 17:50:45.569330 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:50:49 crc kubenswrapper[4853]: E1209 17:50:49.571239 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:50:58 crc kubenswrapper[4853]: E1209 17:50:58.571452 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:50:59 crc kubenswrapper[4853]: I1209 17:50:59.566881 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:50:59 crc kubenswrapper[4853]: E1209 17:50:59.567648 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:51:01 crc kubenswrapper[4853]: E1209 17:51:01.574971 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.051883 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v"] Dec 09 17:51:05 crc kubenswrapper[4853]: E1209 17:51:05.053143 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="extract-content" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.053166 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="extract-content" Dec 09 17:51:05 crc kubenswrapper[4853]: E1209 17:51:05.053211 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="extract-utilities" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.053226 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="extract-utilities" Dec 09 17:51:05 crc kubenswrapper[4853]: E1209 17:51:05.053293 
4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="registry-server" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.053307 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="registry-server" Dec 09 17:51:05 crc kubenswrapper[4853]: E1209 17:51:05.053349 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6917b95-0219-402f-8309-76ad558f9756" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.053362 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6917b95-0219-402f-8309-76ad558f9756" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.053765 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb130e39-5eaa-4535-b525-70c76835da53" containerName="registry-server" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.053819 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6917b95-0219-402f-8309-76ad558f9756" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.055409 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.058148 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.058363 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.058571 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.067822 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.070412 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v"] Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.155851 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.155986 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.156058 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tht\" (UniqueName: \"kubernetes.io/projected/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-kube-api-access-f4tht\") 
pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.257825 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.257912 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tht\" (UniqueName: \"kubernetes.io/projected/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-kube-api-access-f4tht\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.258008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.264642 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.266589 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.283242 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tht\" (UniqueName: \"kubernetes.io/projected/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-kube-api-access-f4tht\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:05 crc kubenswrapper[4853]: I1209 17:51:05.408276 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:51:06 crc kubenswrapper[4853]: I1209 17:51:06.056377 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v"] Dec 09 17:51:06 crc kubenswrapper[4853]: I1209 17:51:06.317640 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" event={"ID":"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9","Type":"ContainerStarted","Data":"43f84ba043198bf7b536c7888b03eedeb476da965211996f9e608d05036275f8"} Dec 09 17:51:07 crc kubenswrapper[4853]: I1209 17:51:07.334961 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" event={"ID":"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9","Type":"ContainerStarted","Data":"bdb73fd826429b157738243352aa6723cc8f04ea038492b6c0a2e9f552d8baca"} Dec 09 17:51:07 crc kubenswrapper[4853]: I1209 17:51:07.380910 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" podStartSLOduration=1.914618748 podStartE2EDuration="2.380890332s" podCreationTimestamp="2025-12-09 17:51:05 +0000 UTC" firstStartedPulling="2025-12-09 17:51:06.061541134 +0000 UTC m=+3292.996280326" lastFinishedPulling="2025-12-09 17:51:06.527812728 +0000 UTC m=+3293.462551910" observedRunningTime="2025-12-09 17:51:07.361193411 +0000 UTC m=+3294.295932593" watchObservedRunningTime="2025-12-09 17:51:07.380890332 +0000 UTC m=+3294.315629514" Dec 09 17:51:11 crc kubenswrapper[4853]: I1209 17:51:11.571721 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:51:11 crc kubenswrapper[4853]: E1209 17:51:11.706979 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:51:11 crc kubenswrapper[4853]: E1209 17:51:11.707056 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:51:11 crc kubenswrapper[4853]: E1209 17:51:11.707232 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:51:11 crc kubenswrapper[4853]: E1209 17:51:11.709028 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:51:13 crc kubenswrapper[4853]: E1209 17:51:13.592003 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:51:14 crc kubenswrapper[4853]: I1209 17:51:14.568591 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:51:14 crc kubenswrapper[4853]: E1209 17:51:14.569140 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:51:24 crc kubenswrapper[4853]: E1209 17:51:24.569200 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:51:25 crc kubenswrapper[4853]: E1209 17:51:25.691673 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:51:25 crc kubenswrapper[4853]: E1209 17:51:25.691976 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:51:25 crc kubenswrapper[4853]: E1209 17:51:25.692125 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:51:25 crc kubenswrapper[4853]: E1209 17:51:25.693317 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:51:27 crc kubenswrapper[4853]: I1209 17:51:27.567856 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:51:27 crc kubenswrapper[4853]: E1209 17:51:27.568427 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:51:31 crc kubenswrapper[4853]: I1209 17:51:31.941333 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s2nsl"] Dec 09 17:51:31 crc kubenswrapper[4853]: I1209 17:51:31.944444 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:31 crc kubenswrapper[4853]: I1209 17:51:31.982180 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2nsl"] Dec 09 17:51:31 crc kubenswrapper[4853]: I1209 17:51:31.997535 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a278e1-13a8-4205-b2b1-74f168c7f2ac-catalog-content\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:31 crc kubenswrapper[4853]: I1209 17:51:31.997740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btlnj\" (UniqueName: \"kubernetes.io/projected/83a278e1-13a8-4205-b2b1-74f168c7f2ac-kube-api-access-btlnj\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:31 crc kubenswrapper[4853]: I1209 17:51:31.997872 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83a278e1-13a8-4205-b2b1-74f168c7f2ac-utilities\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.100427 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a278e1-13a8-4205-b2b1-74f168c7f2ac-catalog-content\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.100509 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btlnj\" (UniqueName: \"kubernetes.io/projected/83a278e1-13a8-4205-b2b1-74f168c7f2ac-kube-api-access-btlnj\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.100647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/83a278e1-13a8-4205-b2b1-74f168c7f2ac-utilities\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.101095 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a278e1-13a8-4205-b2b1-74f168c7f2ac-catalog-content\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.101399 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83a278e1-13a8-4205-b2b1-74f168c7f2ac-utilities\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.126430 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btlnj\" (UniqueName: \"kubernetes.io/projected/83a278e1-13a8-4205-b2b1-74f168c7f2ac-kube-api-access-btlnj\") pod \"certified-operators-s2nsl\" (UID: \"83a278e1-13a8-4205-b2b1-74f168c7f2ac\") " pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.278118 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:32 crc kubenswrapper[4853]: I1209 17:51:32.833435 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2nsl"] Dec 09 17:51:33 crc kubenswrapper[4853]: I1209 17:51:33.670503 4853 generic.go:334] "Generic (PLEG): container finished" podID="83a278e1-13a8-4205-b2b1-74f168c7f2ac" containerID="c2064721ddf6f79ea31dd7da17dfe32bd4356f3b31abf8b6bf74ac4ccb9c797d" exitCode=0 Dec 09 17:51:33 crc kubenswrapper[4853]: I1209 17:51:33.670791 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2nsl" event={"ID":"83a278e1-13a8-4205-b2b1-74f168c7f2ac","Type":"ContainerDied","Data":"c2064721ddf6f79ea31dd7da17dfe32bd4356f3b31abf8b6bf74ac4ccb9c797d"} Dec 09 17:51:33 crc kubenswrapper[4853]: I1209 17:51:33.671081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2nsl" event={"ID":"83a278e1-13a8-4205-b2b1-74f168c7f2ac","Type":"ContainerStarted","Data":"8219234eed284b78f4545ab7bb1b57d3ba3d75a21596a921d00f466b790b7d58"} Dec 09 17:51:35 crc kubenswrapper[4853]: E1209 17:51:35.571972 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:51:38 crc kubenswrapper[4853]: E1209 17:51:38.574335 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:51:39 crc kubenswrapper[4853]: I1209 17:51:39.567612 4853 scope.go:117] "RemoveContainer" 
containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:51:39 crc kubenswrapper[4853]: E1209 17:51:39.568213 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:51:40 crc kubenswrapper[4853]: I1209 17:51:40.764924 4853 generic.go:334] "Generic (PLEG): container finished" podID="83a278e1-13a8-4205-b2b1-74f168c7f2ac" containerID="bea18206b4948471fe09dc915ab356f77255a57fdf64be6b4871f8730f8271d8" exitCode=0 Dec 09 17:51:40 crc kubenswrapper[4853]: I1209 17:51:40.764989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2nsl" event={"ID":"83a278e1-13a8-4205-b2b1-74f168c7f2ac","Type":"ContainerDied","Data":"bea18206b4948471fe09dc915ab356f77255a57fdf64be6b4871f8730f8271d8"} Dec 09 17:51:41 crc kubenswrapper[4853]: I1209 17:51:41.780125 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2nsl" event={"ID":"83a278e1-13a8-4205-b2b1-74f168c7f2ac","Type":"ContainerStarted","Data":"6401eb7a4a6ced2ecfca683593b1ef64acc6acbc4972f92c6d6d2e55ccb9346a"} Dec 09 17:51:41 crc kubenswrapper[4853]: I1209 17:51:41.811814 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s2nsl" podStartSLOduration=3.240421157 podStartE2EDuration="10.811794467s" podCreationTimestamp="2025-12-09 17:51:31 +0000 UTC" firstStartedPulling="2025-12-09 17:51:33.672986125 +0000 UTC m=+3320.607725307" lastFinishedPulling="2025-12-09 17:51:41.244359435 +0000 UTC m=+3328.179098617" observedRunningTime="2025-12-09 17:51:41.79918488 +0000 UTC m=+3328.733924062" watchObservedRunningTime="2025-12-09 17:51:41.811794467 +0000 UTC m=+3328.746533649" Dec 09 17:51:42 crc kubenswrapper[4853]: I1209 17:51:42.278394 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:42 crc kubenswrapper[4853]: I1209 17:51:42.278448 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:43 crc kubenswrapper[4853]: I1209 17:51:43.327461 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s2nsl" podUID="83a278e1-13a8-4205-b2b1-74f168c7f2ac" containerName="registry-server" probeResult="failure" output=< Dec 09 17:51:43 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 17:51:43 crc kubenswrapper[4853]: > Dec 09 17:51:46 crc kubenswrapper[4853]: E1209 17:51:46.569558 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:51:50 crc kubenswrapper[4853]: I1209 17:51:50.567382 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:51:50 crc kubenswrapper[4853]: E1209 
17:51:50.568297 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:51:52 crc kubenswrapper[4853]: I1209 17:51:52.369059 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:52 crc kubenswrapper[4853]: I1209 17:51:52.468703 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s2nsl" Dec 09 17:51:52 crc kubenswrapper[4853]: I1209 17:51:52.572004 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2nsl"] Dec 09 17:51:52 crc kubenswrapper[4853]: I1209 17:51:52.638859 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n5zk8"] Dec 09 17:51:52 crc kubenswrapper[4853]: I1209 17:51:52.639211 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n5zk8" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="registry-server" containerID="cri-o://899dc8d2628b6f674c4a5e35584c0bf6eb59945fba52c8123f75d4590132ec45" gracePeriod=2 Dec 09 17:51:52 crc kubenswrapper[4853]: I1209 17:51:52.915676 4853 generic.go:334] "Generic (PLEG): container finished" podID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerID="899dc8d2628b6f674c4a5e35584c0bf6eb59945fba52c8123f75d4590132ec45" exitCode=0 Dec 09 17:51:52 crc kubenswrapper[4853]: I1209 17:51:52.915820 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5zk8" event={"ID":"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c","Type":"ContainerDied","Data":"899dc8d2628b6f674c4a5e35584c0bf6eb59945fba52c8123f75d4590132ec45"} Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.191078 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.229839 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q98np\" (UniqueName: \"kubernetes.io/projected/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-kube-api-access-q98np\") pod \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.230006 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-utilities\") pod \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.230128 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-catalog-content\") pod \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\" (UID: \"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c\") " Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.231205 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-utilities" (OuterVolumeSpecName: "utilities") pod "f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" (UID: "f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.253020 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-kube-api-access-q98np" (OuterVolumeSpecName: "kube-api-access-q98np") pod "f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" (UID: "f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c"). InnerVolumeSpecName "kube-api-access-q98np". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.290986 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" (UID: "f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.349051 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.349093 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.349107 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q98np\" (UniqueName: \"kubernetes.io/projected/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c-kube-api-access-q98np\") on node \"crc\" DevicePath \"\"" Dec 09 17:51:53 crc kubenswrapper[4853]: E1209 17:51:53.585242 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.928055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5zk8" event={"ID":"f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c","Type":"ContainerDied","Data":"d0dc52ae573369dfc8989bac45a61ff92b65f2712c5d832cafe8633d2dfbfc27"} Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.928123 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5zk8" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.928135 4853 scope.go:117] "RemoveContainer" containerID="899dc8d2628b6f674c4a5e35584c0bf6eb59945fba52c8123f75d4590132ec45" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.962757 4853 scope.go:117] "RemoveContainer" containerID="5a57adc92b122e31fb0a33408e24dbecef13eb409bdc233f852b8d356fa838d4" Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.962926 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n5zk8"] Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.978025 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n5zk8"] Dec 09 17:51:53 crc kubenswrapper[4853]: I1209 17:51:53.991829 4853 scope.go:117] "RemoveContainer" containerID="c53dcd44f47d1f3234df523ee9bfa879ed4f81c686b144e199245519d9ccf779" Dec 09 17:51:55 crc kubenswrapper[4853]: I1209 17:51:55.585840 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" path="/var/lib/kubelet/pods/f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c/volumes" Dec 09 17:51:59 crc kubenswrapper[4853]: E1209 17:51:59.569096 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:52:03 crc kubenswrapper[4853]: I1209 17:52:03.567311 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:52:04 crc kubenswrapper[4853]: I1209 
17:52:04.051858 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"0486b0a7024d73699304efb8f2a3cd14d328e805bf023a4b1898d03448d1cb83"} Dec 09 17:52:06 crc kubenswrapper[4853]: E1209 17:52:06.568785 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:52:10 crc kubenswrapper[4853]: E1209 17:52:10.576952 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:52:17 crc kubenswrapper[4853]: E1209 17:52:17.570486 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:52:21 crc kubenswrapper[4853]: E1209 17:52:21.571987 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:52:31 crc kubenswrapper[4853]: E1209 17:52:31.569821 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:52:32 crc kubenswrapper[4853]: E1209 17:52:32.568842 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:52:46 crc kubenswrapper[4853]: E1209 17:52:46.569789 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:52:47 crc kubenswrapper[4853]: E1209 17:52:47.569402 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:52:58 crc kubenswrapper[4853]: 
E1209 17:52:58.570484 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:52:58 crc kubenswrapper[4853]: E1209 17:52:58.572966 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:53:10 crc kubenswrapper[4853]: E1209 17:53:10.572926 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:53:12 crc kubenswrapper[4853]: E1209 17:53:12.570080 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:53:22 crc kubenswrapper[4853]: E1209 17:53:22.570765 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:53:27 crc kubenswrapper[4853]: E1209 17:53:27.571169 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:53:33 crc kubenswrapper[4853]: E1209 17:53:33.597804 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:53:42 crc kubenswrapper[4853]: E1209 17:53:42.569276 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:53:44 crc kubenswrapper[4853]: E1209 17:53:44.569992 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:53:56 crc kubenswrapper[4853]: E1209 17:53:56.570502 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:53:57 crc kubenswrapper[4853]: E1209 17:53:57.568933 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:54:08 crc kubenswrapper[4853]: E1209 17:54:08.572322 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:54:11 crc kubenswrapper[4853]: E1209 17:54:11.573057 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:54:20 crc kubenswrapper[4853]: E1209 17:54:20.569534 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:54:24 crc kubenswrapper[4853]: E1209 17:54:24.569826 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:54:28 crc kubenswrapper[4853]: I1209 17:54:28.593382 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:54:28 crc kubenswrapper[4853]: I1209 17:54:28.593999 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:54:33 crc kubenswrapper[4853]: E1209 17:54:33.580480 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:54:39 crc kubenswrapper[4853]: E1209 17:54:39.570311 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:54:46 crc kubenswrapper[4853]: E1209 17:54:46.570795 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:54:50 crc kubenswrapper[4853]: E1209 17:54:50.574209 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:54:57 crc kubenswrapper[4853]: E1209 17:54:57.570402 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:54:58 crc kubenswrapper[4853]: I1209 17:54:58.593463 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:54:58 crc kubenswrapper[4853]: I1209 17:54:58.594356 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:55:03 crc kubenswrapper[4853]: E1209 17:55:03.578967 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:55:12 crc kubenswrapper[4853]: E1209 17:55:12.571193 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:55:16 crc kubenswrapper[4853]: E1209 17:55:16.569942 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:55:26 crc kubenswrapper[4853]: E1209 17:55:26.571248 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:55:28 crc kubenswrapper[4853]: I1209 17:55:28.592818 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:55:28 crc kubenswrapper[4853]: I1209 17:55:28.593278 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:55:28 crc kubenswrapper[4853]: I1209 17:55:28.593334 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:55:28 crc kubenswrapper[4853]: I1209 17:55:28.594572 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0486b0a7024d73699304efb8f2a3cd14d328e805bf023a4b1898d03448d1cb83"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:55:28 crc kubenswrapper[4853]: I1209 17:55:28.594669 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://0486b0a7024d73699304efb8f2a3cd14d328e805bf023a4b1898d03448d1cb83" gracePeriod=600 Dec 09 17:55:29 crc kubenswrapper[4853]: I1209 17:55:29.517183 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="0486b0a7024d73699304efb8f2a3cd14d328e805bf023a4b1898d03448d1cb83" exitCode=0 Dec 09 17:55:29 crc kubenswrapper[4853]: I1209 17:55:29.517280 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"0486b0a7024d73699304efb8f2a3cd14d328e805bf023a4b1898d03448d1cb83"} Dec 09 17:55:29 crc kubenswrapper[4853]: I1209 17:55:29.517772 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36"} Dec 09 17:55:29 crc kubenswrapper[4853]: I1209 17:55:29.517811 4853 scope.go:117] "RemoveContainer" containerID="22fb9f8cbf4ec4b57600d0f69afc8de08f91cc33c8194abf9ba4036672ce063c" Dec 09 17:55:29 crc kubenswrapper[4853]: E1209 17:55:29.568748 4853 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:55:37 crc kubenswrapper[4853]: E1209 17:55:37.572448 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:55:42 crc kubenswrapper[4853]: E1209 17:55:42.569784 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:55:49 crc kubenswrapper[4853]: E1209 17:55:49.569907 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:55:54 crc kubenswrapper[4853]: E1209 17:55:54.569824 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:56:03 crc kubenswrapper[4853]: E1209 17:56:03.576396 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:56:06 crc kubenswrapper[4853]: E1209 17:56:06.571124 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:56:15 crc kubenswrapper[4853]: E1209 17:56:15.570522 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:56:21 crc kubenswrapper[4853]: I1209 17:56:21.572198 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 17:56:21 crc kubenswrapper[4853]: E1209 17:56:21.702186 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:56:21 crc kubenswrapper[4853]: E1209 17:56:21.702260 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 17:56:21 crc kubenswrapper[4853]: E1209 17:56:21.702444 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:56:21 crc kubenswrapper[4853]: E1209 17:56:21.703572 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:56:27 crc kubenswrapper[4853]: E1209 17:56:27.661659 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:56:27 crc kubenswrapper[4853]: E1209 17:56:27.662247 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 17:56:27 crc kubenswrapper[4853]: E1209 17:56:27.662437 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 17:56:27 crc kubenswrapper[4853]: E1209 17:56:27.663587 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:56:35 crc kubenswrapper[4853]: E1209 17:56:35.570707 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:56:41 crc kubenswrapper[4853]: E1209 17:56:41.571472 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:56:48 crc kubenswrapper[4853]: E1209 17:56:48.574045 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:56:52 crc kubenswrapper[4853]: E1209 17:56:52.573987 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:56:59 crc kubenswrapper[4853]: E1209 17:56:59.579295 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:57:03 crc kubenswrapper[4853]: E1209 17:57:03.584327 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:57:14 crc kubenswrapper[4853]: E1209 17:57:14.570757 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:57:18 crc kubenswrapper[4853]: E1209 17:57:18.570121 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:57:26 crc kubenswrapper[4853]: E1209 17:57:26.580142 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:57:27 crc kubenswrapper[4853]: I1209 17:57:27.944731 4853 generic.go:334] "Generic (PLEG): container finished" podID="4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9" containerID="bdb73fd826429b157738243352aa6723cc8f04ea038492b6c0a2e9f552d8baca" exitCode=2 Dec 09 17:57:27 crc kubenswrapper[4853]: I1209 17:57:27.944823 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" event={"ID":"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9","Type":"ContainerDied","Data":"bdb73fd826429b157738243352aa6723cc8f04ea038492b6c0a2e9f552d8baca"} Dec 09 17:57:28 crc kubenswrapper[4853]: I1209 17:57:28.593259 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:57:28 crc kubenswrapper[4853]: I1209 17:57:28.593349 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.499362 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.607519 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-inventory\") pod \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.607671 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-ssh-key\") pod \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.607713 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4tht\" (UniqueName: \"kubernetes.io/projected/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-kube-api-access-f4tht\") pod \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\" (UID: \"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9\") " Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.613561 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-kube-api-access-f4tht" (OuterVolumeSpecName: "kube-api-access-f4tht") pod "4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9" (UID: "4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9"). InnerVolumeSpecName "kube-api-access-f4tht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.639066 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9" (UID: "4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.648010 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-inventory" (OuterVolumeSpecName: "inventory") pod "4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9" (UID: "4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.710957 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.710987 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.710998 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4tht\" (UniqueName: \"kubernetes.io/projected/4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9-kube-api-access-f4tht\") on node \"crc\" DevicePath \"\"" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.971717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" event={"ID":"4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9","Type":"ContainerDied","Data":"43f84ba043198bf7b536c7888b03eedeb476da965211996f9e608d05036275f8"} Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.971884 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v" Dec 09 17:57:29 crc kubenswrapper[4853]: I1209 17:57:29.971781 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43f84ba043198bf7b536c7888b03eedeb476da965211996f9e608d05036275f8" Dec 09 17:57:33 crc kubenswrapper[4853]: E1209 17:57:33.580744 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:57:37 crc kubenswrapper[4853]: E1209 17:57:37.569843 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:57:45 crc kubenswrapper[4853]: E1209 17:57:45.569943 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:57:48 crc kubenswrapper[4853]: E1209 17:57:48.570494 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:57:58 crc kubenswrapper[4853]: E1209 17:57:58.589840 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:57:58 crc kubenswrapper[4853]: I1209 17:57:58.594183 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:57:58 crc kubenswrapper[4853]: I1209 17:57:58.594399 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:58:00 crc kubenswrapper[4853]: E1209 17:58:00.571450 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:58:11 crc kubenswrapper[4853]: E1209 17:58:11.570504 4853 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:58:13 crc kubenswrapper[4853]: E1209 17:58:13.579805 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:58:22 crc kubenswrapper[4853]: E1209 17:58:22.569802 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:58:27 crc kubenswrapper[4853]: E1209 17:58:27.571031 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:58:28 crc kubenswrapper[4853]: I1209 17:58:28.593364 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 17:58:28 crc kubenswrapper[4853]: I1209 17:58:28.593474 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 17:58:28 crc kubenswrapper[4853]: I1209 17:58:28.593534 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 17:58:28 crc kubenswrapper[4853]: I1209 17:58:28.594522 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 17:58:28 crc kubenswrapper[4853]: I1209 17:58:28.594623 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" gracePeriod=600 Dec 09 17:58:28 crc kubenswrapper[4853]: E1209 17:58:28.727552 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:58:29 crc kubenswrapper[4853]: I1209 17:58:29.677041 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" exitCode=0 Dec 09 17:58:29 crc kubenswrapper[4853]: I1209 17:58:29.677110 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36"} Dec 09 17:58:29 crc kubenswrapper[4853]: I1209 17:58:29.677364 4853 scope.go:117] "RemoveContainer" containerID="0486b0a7024d73699304efb8f2a3cd14d328e805bf023a4b1898d03448d1cb83" Dec 09 17:58:29 crc kubenswrapper[4853]: I1209 17:58:29.678427 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 17:58:29 crc kubenswrapper[4853]: E1209 17:58:29.678997 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:58:36 crc kubenswrapper[4853]: E1209 17:58:36.569492 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.471065 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kcb9g"] Dec 09 17:58:40 crc kubenswrapper[4853]: E1209 17:58:40.472090 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.472108 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:58:40 crc kubenswrapper[4853]: E1209 17:58:40.472174 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="extract-utilities" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.472183 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="extract-utilities" Dec 09 17:58:40 crc kubenswrapper[4853]: E1209 17:58:40.472222 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="extract-content" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.472230 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="extract-content" Dec 09 
17:58:40 crc kubenswrapper[4853]: E1209 17:58:40.472256 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="registry-server" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.472265 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="registry-server" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.472525 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c2a5d6-d301-4cc0-b4d0-1e53e99aaf5c" containerName="registry-server" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.472544 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.474822 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.487786 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcb9g"] Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.593086 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-utilities\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.593150 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrk48\" (UniqueName: \"kubernetes.io/projected/51b76e3e-830c-4415-92ee-bfa4956f6688-kube-api-access-qrk48\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.593236 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-catalog-content\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.695374 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-utilities\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.695493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrk48\" (UniqueName: \"kubernetes.io/projected/51b76e3e-830c-4415-92ee-bfa4956f6688-kube-api-access-qrk48\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.695573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-catalog-content\") pod \"redhat-marketplace-kcb9g\" (UID: 
\"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.695990 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-utilities\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.696075 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-catalog-content\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.717831 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrk48\" (UniqueName: \"kubernetes.io/projected/51b76e3e-830c-4415-92ee-bfa4956f6688-kube-api-access-qrk48\") pod \"redhat-marketplace-kcb9g\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:40 crc kubenswrapper[4853]: I1209 17:58:40.806969 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:41 crc kubenswrapper[4853]: I1209 17:58:41.367839 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcb9g"] Dec 09 17:58:41 crc kubenswrapper[4853]: I1209 17:58:41.831162 4853 generic.go:334] "Generic (PLEG): container finished" podID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerID="87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5" exitCode=0 Dec 09 17:58:41 crc kubenswrapper[4853]: I1209 17:58:41.831216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcb9g" event={"ID":"51b76e3e-830c-4415-92ee-bfa4956f6688","Type":"ContainerDied","Data":"87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5"} Dec 09 17:58:41 crc kubenswrapper[4853]: I1209 17:58:41.831244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcb9g" event={"ID":"51b76e3e-830c-4415-92ee-bfa4956f6688","Type":"ContainerStarted","Data":"a75862bc0f4a5c59f5a0e71545f8ffed248a4f1a0f4bf12a872f468bf46f155b"} Dec 09 17:58:42 crc kubenswrapper[4853]: E1209 17:58:42.569566 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.270784 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k82w9"] Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.273845 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.282544 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k82w9"] Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.364557 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blzfg\" (UniqueName: \"kubernetes.io/projected/63475b77-8915-4566-844c-7d9ebd035fa8-kube-api-access-blzfg\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.364662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-catalog-content\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.364854 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-utilities\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.467686 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blzfg\" (UniqueName: \"kubernetes.io/projected/63475b77-8915-4566-844c-7d9ebd035fa8-kube-api-access-blzfg\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.467784 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-catalog-content\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.468247 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-catalog-content\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.468434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-utilities\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.468722 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-utilities\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.489405 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-blzfg\" (UniqueName: \"kubernetes.io/projected/63475b77-8915-4566-844c-7d9ebd035fa8-kube-api-access-blzfg\") pod \"community-operators-k82w9\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.574945 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 17:58:43 crc kubenswrapper[4853]: E1209 17:58:43.575233 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:58:43 crc kubenswrapper[4853]: I1209 17:58:43.604877 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:44 crc kubenswrapper[4853]: I1209 17:58:44.179388 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k82w9"] Dec 09 17:58:44 crc kubenswrapper[4853]: I1209 17:58:44.873121 4853 generic.go:334] "Generic (PLEG): container finished" podID="63475b77-8915-4566-844c-7d9ebd035fa8" containerID="2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e" exitCode=0 Dec 09 17:58:44 crc kubenswrapper[4853]: I1209 17:58:44.873221 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k82w9" event={"ID":"63475b77-8915-4566-844c-7d9ebd035fa8","Type":"ContainerDied","Data":"2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e"} Dec 09 17:58:44 crc kubenswrapper[4853]: I1209 17:58:44.873244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k82w9" event={"ID":"63475b77-8915-4566-844c-7d9ebd035fa8","Type":"ContainerStarted","Data":"6ea291f66c2a3f2108ab227f10dae1af325334a9912bd02b026c48eca41b1a3b"} Dec 09 17:58:44 crc kubenswrapper[4853]: I1209 17:58:44.876277 4853 generic.go:334] "Generic (PLEG): container finished" podID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerID="6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae" exitCode=0 Dec 09 17:58:44 crc kubenswrapper[4853]: I1209 17:58:44.876318 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcb9g" event={"ID":"51b76e3e-830c-4415-92ee-bfa4956f6688","Type":"ContainerDied","Data":"6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae"} Dec 09 17:58:45 crc kubenswrapper[4853]: I1209 17:58:45.885791 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k82w9" event={"ID":"63475b77-8915-4566-844c-7d9ebd035fa8","Type":"ContainerStarted","Data":"74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a"} Dec 09 17:58:45 crc kubenswrapper[4853]: I1209 17:58:45.889644 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcb9g" event={"ID":"51b76e3e-830c-4415-92ee-bfa4956f6688","Type":"ContainerStarted","Data":"2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e"} Dec 09 17:58:45 crc kubenswrapper[4853]: I1209 17:58:45.980069 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kcb9g" podStartSLOduration=2.461643867 podStartE2EDuration="5.980047779s" podCreationTimestamp="2025-12-09 17:58:40 +0000 UTC" firstStartedPulling="2025-12-09 17:58:41.83678132 +0000 UTC m=+3748.771520502" lastFinishedPulling="2025-12-09 17:58:45.355185222 +0000 UTC m=+3752.289924414" observedRunningTime="2025-12-09 17:58:45.976158882 +0000 UTC m=+3752.910898064" watchObservedRunningTime="2025-12-09 17:58:45.980047779 +0000 UTC m=+3752.914786971" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.040961 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x"] Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.043288 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.047127 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.047191 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.047408 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.047817 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.068527 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x"] Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.188329 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h495n\" (UniqueName: \"kubernetes.io/projected/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-kube-api-access-h495n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.188531 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.188640 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.290709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.290774 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h495n\" (UniqueName: \"kubernetes.io/projected/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-kube-api-access-h495n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.290934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.298052 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.302777 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.310968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h495n\" (UniqueName: \"kubernetes.io/projected/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-kube-api-access-h495n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.365371 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.920133 4853 generic.go:334] "Generic (PLEG): container finished" podID="63475b77-8915-4566-844c-7d9ebd035fa8" containerID="74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a" exitCode=0 Dec 09 17:58:47 crc kubenswrapper[4853]: I1209 17:58:47.920208 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k82w9" event={"ID":"63475b77-8915-4566-844c-7d9ebd035fa8","Type":"ContainerDied","Data":"74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a"} Dec 09 17:58:48 crc kubenswrapper[4853]: I1209 17:58:48.026630 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x"] Dec 09 17:58:48 crc kubenswrapper[4853]: I1209 17:58:48.931653 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k82w9" event={"ID":"63475b77-8915-4566-844c-7d9ebd035fa8","Type":"ContainerStarted","Data":"7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97"} Dec 09 17:58:48 crc kubenswrapper[4853]: I1209 17:58:48.933014 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" event={"ID":"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c","Type":"ContainerStarted","Data":"0df72078dd3b4441c2a1f8e64920911514451eee19e46c31eac83d34f5f18791"} Dec 09 17:58:48 crc kubenswrapper[4853]: I1209 17:58:48.933058 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" event={"ID":"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c","Type":"ContainerStarted","Data":"d843e04724a3d5aa7144f597e0f02f0cf79888e67042e8ad2f124666f73649cd"} Dec 09 17:58:48 crc kubenswrapper[4853]: I1209 17:58:48.957432 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k82w9" podStartSLOduration=2.320860461 podStartE2EDuration="5.957414203s" podCreationTimestamp="2025-12-09 17:58:43 +0000 UTC" firstStartedPulling="2025-12-09 17:58:44.874922201 +0000 UTC m=+3751.809661393" lastFinishedPulling="2025-12-09 17:58:48.511475953 +0000 UTC m=+3755.446215135" observedRunningTime="2025-12-09 17:58:48.955276914 +0000 UTC m=+3755.890016116" watchObservedRunningTime="2025-12-09 17:58:48.957414203 +0000 UTC m=+3755.892153385" Dec 09 17:58:48 crc kubenswrapper[4853]: I1209 17:58:48.984405 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" podStartSLOduration=1.568556018 podStartE2EDuration="1.984386243s" podCreationTimestamp="2025-12-09 17:58:47 +0000 UTC" firstStartedPulling="2025-12-09 17:58:48.024470917 +0000 UTC m=+3754.959210099" lastFinishedPulling="2025-12-09 17:58:48.440301142 +0000 UTC m=+3755.375040324" observedRunningTime="2025-12-09 17:58:48.977185185 +0000 UTC m=+3755.911924377" watchObservedRunningTime="2025-12-09 17:58:48.984386243 +0000 UTC m=+3755.919125425" Dec 09 17:58:49 crc kubenswrapper[4853]: E1209 17:58:49.568582 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" 
podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:58:50 crc kubenswrapper[4853]: I1209 17:58:50.807805 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:50 crc kubenswrapper[4853]: I1209 17:58:50.808123 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:50 crc kubenswrapper[4853]: I1209 17:58:50.861966 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:51 crc kubenswrapper[4853]: I1209 17:58:51.018397 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:51 crc kubenswrapper[4853]: I1209 17:58:51.659851 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcb9g"] Dec 09 17:58:52 crc kubenswrapper[4853]: I1209 17:58:52.994138 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kcb9g" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="registry-server" containerID="cri-o://2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e" gracePeriod=2 Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.563777 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:53 crc kubenswrapper[4853]: E1209 17:58:53.570178 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.605633 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.605686 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.649307 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-catalog-content\") pod \"51b76e3e-830c-4415-92ee-bfa4956f6688\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.649361 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-utilities\") pod \"51b76e3e-830c-4415-92ee-bfa4956f6688\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.649418 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrk48\" (UniqueName: \"kubernetes.io/projected/51b76e3e-830c-4415-92ee-bfa4956f6688-kube-api-access-qrk48\") pod \"51b76e3e-830c-4415-92ee-bfa4956f6688\" (UID: \"51b76e3e-830c-4415-92ee-bfa4956f6688\") " Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.651498 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-utilities" (OuterVolumeSpecName: "utilities") pod "51b76e3e-830c-4415-92ee-bfa4956f6688" (UID: "51b76e3e-830c-4415-92ee-bfa4956f6688"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.659976 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b76e3e-830c-4415-92ee-bfa4956f6688-kube-api-access-qrk48" (OuterVolumeSpecName: "kube-api-access-qrk48") pod "51b76e3e-830c-4415-92ee-bfa4956f6688" (UID: "51b76e3e-830c-4415-92ee-bfa4956f6688"). InnerVolumeSpecName "kube-api-access-qrk48". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.663345 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.671656 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51b76e3e-830c-4415-92ee-bfa4956f6688" (UID: "51b76e3e-830c-4415-92ee-bfa4956f6688"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.752842 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.752874 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b76e3e-830c-4415-92ee-bfa4956f6688-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:58:53 crc kubenswrapper[4853]: I1209 17:58:53.752883 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrk48\" (UniqueName: \"kubernetes.io/projected/51b76e3e-830c-4415-92ee-bfa4956f6688-kube-api-access-qrk48\") on node \"crc\" DevicePath \"\"" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.007553 4853 generic.go:334] "Generic (PLEG): container finished" podID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerID="2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e" exitCode=0 Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.007632 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcb9g" event={"ID":"51b76e3e-830c-4415-92ee-bfa4956f6688","Type":"ContainerDied","Data":"2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e"} Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.008016 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcb9g" event={"ID":"51b76e3e-830c-4415-92ee-bfa4956f6688","Type":"ContainerDied","Data":"a75862bc0f4a5c59f5a0e71545f8ffed248a4f1a0f4bf12a872f468bf46f155b"} Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.008041 4853 scope.go:117] "RemoveContainer" containerID="2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.007649 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcb9g" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.049352 4853 scope.go:117] "RemoveContainer" containerID="6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.062238 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcb9g"] Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.076562 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcb9g"] Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.079906 4853 scope.go:117] "RemoveContainer" containerID="87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.088284 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.164210 4853 scope.go:117] "RemoveContainer" containerID="2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e" Dec 09 17:58:54 crc kubenswrapper[4853]: E1209 17:58:54.164962 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e\": container with ID starting with 2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e not found: ID does not exist" containerID="2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.165000 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e"} err="failed to get container status \"2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e\": rpc error: code = NotFound desc = could not find container \"2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e\": container with ID starting with 2cd0acc881506a6abfa0b5b72c61328c64687bcf329ad52e7bd6db9eca1d2a3e not found: ID does not exist" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.165047 4853 scope.go:117] "RemoveContainer" containerID="6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae" Dec 09 17:58:54 crc kubenswrapper[4853]: E1209 17:58:54.165618 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae\": container with ID starting with 6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae not found: ID does not exist" containerID="6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.165652 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae"} err="failed to get container status \"6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae\": rpc error: code = NotFound desc = could not find container \"6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae\": container with ID starting with 6f8cdb8a2ff6f27f7d332b48b2aa2ae52f843600f2f81da37c0481863e09e5ae not found: ID does not exist" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.165677 4853 scope.go:117] "RemoveContainer" 
containerID="87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5" Dec 09 17:58:54 crc kubenswrapper[4853]: E1209 17:58:54.166012 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5\": container with ID starting with 87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5 not found: ID does not exist" containerID="87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5" Dec 09 17:58:54 crc kubenswrapper[4853]: I1209 17:58:54.166071 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5"} err="failed to get container status \"87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5\": rpc error: code = NotFound desc = could not find container \"87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5\": container with ID starting with 87ce6d75d7b85f3661c23e635d58391693b89eff2474751fdaa42b971da338e5 not found: ID does not exist" Dec 09 17:58:55 crc kubenswrapper[4853]: I1209 17:58:55.567974 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 17:58:55 crc kubenswrapper[4853]: E1209 17:58:55.568542 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:58:55 crc kubenswrapper[4853]: I1209 17:58:55.589386 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" path="/var/lib/kubelet/pods/51b76e3e-830c-4415-92ee-bfa4956f6688/volumes" Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.055754 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k82w9"] Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.056010 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k82w9" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="registry-server" containerID="cri-o://7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97" gracePeriod=2 Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.571939 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.625132 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-utilities\") pod \"63475b77-8915-4566-844c-7d9ebd035fa8\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.625334 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blzfg\" (UniqueName: \"kubernetes.io/projected/63475b77-8915-4566-844c-7d9ebd035fa8-kube-api-access-blzfg\") pod \"63475b77-8915-4566-844c-7d9ebd035fa8\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.625493 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-catalog-content\") pod \"63475b77-8915-4566-844c-7d9ebd035fa8\" (UID: \"63475b77-8915-4566-844c-7d9ebd035fa8\") " Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.625785 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-utilities" (OuterVolumeSpecName: "utilities") pod "63475b77-8915-4566-844c-7d9ebd035fa8" (UID: "63475b77-8915-4566-844c-7d9ebd035fa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.626188 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.633219 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63475b77-8915-4566-844c-7d9ebd035fa8-kube-api-access-blzfg" (OuterVolumeSpecName: "kube-api-access-blzfg") pod "63475b77-8915-4566-844c-7d9ebd035fa8" (UID: "63475b77-8915-4566-844c-7d9ebd035fa8"). InnerVolumeSpecName "kube-api-access-blzfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.682293 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63475b77-8915-4566-844c-7d9ebd035fa8" (UID: "63475b77-8915-4566-844c-7d9ebd035fa8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.728842 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63475b77-8915-4566-844c-7d9ebd035fa8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 17:58:56 crc kubenswrapper[4853]: I1209 17:58:56.728881 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blzfg\" (UniqueName: \"kubernetes.io/projected/63475b77-8915-4566-844c-7d9ebd035fa8-kube-api-access-blzfg\") on node \"crc\" DevicePath \"\"" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.040719 4853 generic.go:334] "Generic (PLEG): container finished" podID="63475b77-8915-4566-844c-7d9ebd035fa8" containerID="7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97" exitCode=0 Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.040788 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k82w9" event={"ID":"63475b77-8915-4566-844c-7d9ebd035fa8","Type":"ContainerDied","Data":"7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97"} Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.041037 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k82w9" event={"ID":"63475b77-8915-4566-844c-7d9ebd035fa8","Type":"ContainerDied","Data":"6ea291f66c2a3f2108ab227f10dae1af325334a9912bd02b026c48eca41b1a3b"} Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.040850 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k82w9" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.041059 4853 scope.go:117] "RemoveContainer" containerID="7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.061854 4853 scope.go:117] "RemoveContainer" containerID="74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.089536 4853 scope.go:117] "RemoveContainer" containerID="2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.099708 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k82w9"] Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.116019 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k82w9"] Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.158814 4853 scope.go:117] "RemoveContainer" containerID="7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97" Dec 09 17:58:57 crc kubenswrapper[4853]: E1209 17:58:57.159247 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97\": container with ID starting with 7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97 not found: ID does not exist" containerID="7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.159289 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97"} err="failed to get container status 
\"7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97\": rpc error: code = NotFound desc = could not find container \"7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97\": container with ID starting with 7402fe115a5565f3cf96ac4553161e683900b33557d5090f8553a57e738cfd97 not found: ID does not exist" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.159323 4853 scope.go:117] "RemoveContainer" containerID="74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a" Dec 09 17:58:57 crc kubenswrapper[4853]: E1209 17:58:57.159641 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a\": container with ID starting with 74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a not found: ID does not exist" containerID="74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.159687 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a"} err="failed to get container status \"74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a\": rpc error: code = NotFound desc = could not find container \"74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a\": container with ID starting with 74981b237cb9af4e9404807cf35f8573a9375b60b12d9c0287d137eef0a6279a not found: ID does not exist" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.159712 4853 scope.go:117] "RemoveContainer" containerID="2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e" Dec 09 17:58:57 crc kubenswrapper[4853]: E1209 17:58:57.160091 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e\": container with ID starting with 2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e not found: ID does not exist" containerID="2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.160172 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e"} err="failed to get container status \"2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e\": rpc error: code = NotFound desc = could not find container \"2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e\": container with ID starting with 2bf5e6090f6215691a8738d8b389f75d88c5e086ba65770b4cce16acf48f134e not found: ID does not exist" Dec 09 17:58:57 crc kubenswrapper[4853]: I1209 17:58:57.586087 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" path="/var/lib/kubelet/pods/63475b77-8915-4566-844c-7d9ebd035fa8/volumes" Dec 09 17:59:02 crc kubenswrapper[4853]: E1209 17:59:02.570008 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:59:04 crc kubenswrapper[4853]: E1209 17:59:04.570532 4853 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:59:06 crc kubenswrapper[4853]: I1209 17:59:06.567815 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 17:59:06 crc kubenswrapper[4853]: E1209 17:59:06.568934 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:59:14 crc kubenswrapper[4853]: E1209 17:59:14.569845 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:59:17 crc kubenswrapper[4853]: E1209 17:59:17.570375 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:59:21 crc kubenswrapper[4853]: I1209 17:59:21.567812 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 17:59:21 crc kubenswrapper[4853]: E1209 17:59:21.570583 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:59:25 crc kubenswrapper[4853]: E1209 17:59:25.570396 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:59:30 crc kubenswrapper[4853]: E1209 17:59:30.569285 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:59:36 crc kubenswrapper[4853]: I1209 17:59:36.568053 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 17:59:36 crc kubenswrapper[4853]: E1209 17:59:36.568852 4853 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:59:39 crc kubenswrapper[4853]: E1209 17:59:39.569949 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 17:59:45 crc kubenswrapper[4853]: E1209 17:59:45.570738 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 17:59:49 crc kubenswrapper[4853]: I1209 17:59:49.569548 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 17:59:49 crc kubenswrapper[4853]: E1209 17:59:49.570289 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 17:59:51 crc kubenswrapper[4853]: E1209 17:59:51.569475 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.159647 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7"] Dec 09 18:00:00 crc kubenswrapper[4853]: E1209 18:00:00.160780 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="extract-utilities" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.160799 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="extract-utilities" Dec 09 18:00:00 crc kubenswrapper[4853]: E1209 18:00:00.160820 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="registry-server" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.160829 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="registry-server" Dec 09 18:00:00 crc kubenswrapper[4853]: E1209 18:00:00.160844 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="extract-content" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.160852 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="extract-content" Dec 09 18:00:00 crc kubenswrapper[4853]: E1209 18:00:00.160874 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="extract-content" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.160882 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="extract-content" Dec 09 18:00:00 crc kubenswrapper[4853]: E1209 18:00:00.160911 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="extract-utilities" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.160918 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="extract-utilities" Dec 09 18:00:00 crc kubenswrapper[4853]: E1209 18:00:00.160972 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="registry-server" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.160983 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="registry-server" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.161255 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b76e3e-830c-4415-92ee-bfa4956f6688" containerName="registry-server" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.161280 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="63475b77-8915-4566-844c-7d9ebd035fa8" containerName="registry-server" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.162404 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.164778 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.165098 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.175565 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7"] Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.339444 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7pc9\" (UniqueName: \"kubernetes.io/projected/ac1a63d0-2087-48f5-8424-6fda3bf1130c-kube-api-access-m7pc9\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.339827 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1a63d0-2087-48f5-8424-6fda3bf1130c-config-volume\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.339914 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1a63d0-2087-48f5-8424-6fda3bf1130c-secret-volume\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.441645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1a63d0-2087-48f5-8424-6fda3bf1130c-secret-volume\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.441847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7pc9\" (UniqueName: \"kubernetes.io/projected/ac1a63d0-2087-48f5-8424-6fda3bf1130c-kube-api-access-m7pc9\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.441897 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1a63d0-2087-48f5-8424-6fda3bf1130c-config-volume\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.442710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1a63d0-2087-48f5-8424-6fda3bf1130c-config-volume\") pod 
\"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.450975 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1a63d0-2087-48f5-8424-6fda3bf1130c-secret-volume\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.461774 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7pc9\" (UniqueName: \"kubernetes.io/projected/ac1a63d0-2087-48f5-8424-6fda3bf1130c-kube-api-access-m7pc9\") pod \"collect-profiles-29421720-zl7f7\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.495376 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:00 crc kubenswrapper[4853]: E1209 18:00:00.570798 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:00:00 crc kubenswrapper[4853]: I1209 18:00:00.970688 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7"] Dec 09 18:00:01 crc kubenswrapper[4853]: I1209 18:00:01.843047 4853 generic.go:334] "Generic (PLEG): container finished" podID="ac1a63d0-2087-48f5-8424-6fda3bf1130c" containerID="2f9448cc53ef0cab7ebd9006caad750bfa3c06ca63836fc7a37803315ea99759" exitCode=0 Dec 09 18:00:01 crc kubenswrapper[4853]: I1209 18:00:01.843192 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" event={"ID":"ac1a63d0-2087-48f5-8424-6fda3bf1130c","Type":"ContainerDied","Data":"2f9448cc53ef0cab7ebd9006caad750bfa3c06ca63836fc7a37803315ea99759"} Dec 09 18:00:01 crc kubenswrapper[4853]: I1209 18:00:01.843352 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" event={"ID":"ac1a63d0-2087-48f5-8424-6fda3bf1130c","Type":"ContainerStarted","Data":"34729b414e9c0051fa97e7dcf2f2013ceaf24f0d261a4285a4543bcba29959cf"} Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.257474 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.343498 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1a63d0-2087-48f5-8424-6fda3bf1130c-secret-volume\") pod \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.343718 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7pc9\" (UniqueName: \"kubernetes.io/projected/ac1a63d0-2087-48f5-8424-6fda3bf1130c-kube-api-access-m7pc9\") pod \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.343850 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1a63d0-2087-48f5-8424-6fda3bf1130c-config-volume\") pod \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\" (UID: \"ac1a63d0-2087-48f5-8424-6fda3bf1130c\") " Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.345424 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1a63d0-2087-48f5-8424-6fda3bf1130c-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac1a63d0-2087-48f5-8424-6fda3bf1130c" (UID: "ac1a63d0-2087-48f5-8424-6fda3bf1130c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.350190 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1a63d0-2087-48f5-8424-6fda3bf1130c-kube-api-access-m7pc9" (OuterVolumeSpecName: "kube-api-access-m7pc9") pod "ac1a63d0-2087-48f5-8424-6fda3bf1130c" (UID: "ac1a63d0-2087-48f5-8424-6fda3bf1130c"). InnerVolumeSpecName "kube-api-access-m7pc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.352552 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1a63d0-2087-48f5-8424-6fda3bf1130c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac1a63d0-2087-48f5-8424-6fda3bf1130c" (UID: "ac1a63d0-2087-48f5-8424-6fda3bf1130c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.447103 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1a63d0-2087-48f5-8424-6fda3bf1130c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.447152 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7pc9\" (UniqueName: \"kubernetes.io/projected/ac1a63d0-2087-48f5-8424-6fda3bf1130c-kube-api-access-m7pc9\") on node \"crc\" DevicePath \"\"" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.447165 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1a63d0-2087-48f5-8424-6fda3bf1130c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.580539 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:00:03 crc kubenswrapper[4853]: E1209 18:00:03.580979 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:00:03 crc kubenswrapper[4853]: E1209 18:00:03.581682 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.863505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" event={"ID":"ac1a63d0-2087-48f5-8424-6fda3bf1130c","Type":"ContainerDied","Data":"34729b414e9c0051fa97e7dcf2f2013ceaf24f0d261a4285a4543bcba29959cf"} Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.863752 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421720-zl7f7" Dec 09 18:00:03 crc kubenswrapper[4853]: I1209 18:00:03.863766 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34729b414e9c0051fa97e7dcf2f2013ceaf24f0d261a4285a4543bcba29959cf" Dec 09 18:00:04 crc kubenswrapper[4853]: I1209 18:00:04.357728 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp"] Dec 09 18:00:04 crc kubenswrapper[4853]: I1209 18:00:04.371882 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421675-9mfwp"] Dec 09 18:00:05 crc kubenswrapper[4853]: I1209 18:00:05.594680 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a114a56-f0bf-424e-b627-93415929b182" path="/var/lib/kubelet/pods/9a114a56-f0bf-424e-b627-93415929b182/volumes" Dec 09 18:00:14 crc kubenswrapper[4853]: I1209 18:00:14.568568 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:00:14 crc kubenswrapper[4853]: E1209 18:00:14.569843 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:00:14 crc kubenswrapper[4853]: E1209 18:00:14.571861 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:00:15 crc kubenswrapper[4853]: E1209 18:00:15.569882 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:00:26 crc kubenswrapper[4853]: E1209 18:00:26.569993 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:00:27 crc kubenswrapper[4853]: I1209 18:00:27.567672 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:00:27 crc kubenswrapper[4853]: E1209 18:00:27.568207 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:00:28 crc kubenswrapper[4853]: 
E1209 18:00:28.568652 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:00:30 crc kubenswrapper[4853]: I1209 18:00:30.490324 4853 scope.go:117] "RemoveContainer" containerID="e76db3da77e451294e9b89cf2ccc67a2c556f9bae0dda33b4252cf6a40b042ec" Dec 09 18:00:37 crc kubenswrapper[4853]: E1209 18:00:37.570979 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:00:41 crc kubenswrapper[4853]: I1209 18:00:41.567994 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:00:41 crc kubenswrapper[4853]: E1209 18:00:41.568900 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:00:43 crc kubenswrapper[4853]: E1209 18:00:43.623288 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:00:48 crc kubenswrapper[4853]: E1209 18:00:48.569937 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:00:55 crc kubenswrapper[4853]: E1209 18:00:55.571508 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:00:56 crc kubenswrapper[4853]: I1209 18:00:56.568315 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:00:56 crc kubenswrapper[4853]: E1209 18:00:56.568598 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.149258 
4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29421721-8srlv"] Dec 09 18:01:00 crc kubenswrapper[4853]: E1209 18:01:00.150534 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1a63d0-2087-48f5-8424-6fda3bf1130c" containerName="collect-profiles" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.150554 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1a63d0-2087-48f5-8424-6fda3bf1130c" containerName="collect-profiles" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.150850 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1a63d0-2087-48f5-8424-6fda3bf1130c" containerName="collect-profiles" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.151812 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.207303 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421721-8srlv"] Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.247169 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-combined-ca-bundle\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.247347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-config-data\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.247372 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-fernet-keys\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.247402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcz55\" (UniqueName: \"kubernetes.io/projected/160001ba-a330-4185-b1a1-67bfbdda8cd9-kube-api-access-vcz55\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.350277 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-config-data\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.350359 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-fernet-keys\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.350419 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vcz55\" (UniqueName: \"kubernetes.io/projected/160001ba-a330-4185-b1a1-67bfbdda8cd9-kube-api-access-vcz55\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.350493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-combined-ca-bundle\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.385183 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-combined-ca-bundle\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.385355 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-config-data\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.389839 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-fernet-keys\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.401236 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcz55\" (UniqueName: \"kubernetes.io/projected/160001ba-a330-4185-b1a1-67bfbdda8cd9-kube-api-access-vcz55\") pod \"keystone-cron-29421721-8srlv\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.478994 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:00 crc kubenswrapper[4853]: W1209 18:01:00.993068 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160001ba_a330_4185_b1a1_67bfbdda8cd9.slice/crio-2b129c65b8e71901fbdc58fbb63a9ca9ef76e009c2b268bfc52cfde24e4270c6 WatchSource:0}: Error finding container 2b129c65b8e71901fbdc58fbb63a9ca9ef76e009c2b268bfc52cfde24e4270c6: Status 404 returned error can't find the container with id 2b129c65b8e71901fbdc58fbb63a9ca9ef76e009c2b268bfc52cfde24e4270c6 Dec 09 18:01:00 crc kubenswrapper[4853]: I1209 18:01:00.996195 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421721-8srlv"] Dec 09 18:01:01 crc kubenswrapper[4853]: I1209 18:01:01.886909 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421721-8srlv" event={"ID":"160001ba-a330-4185-b1a1-67bfbdda8cd9","Type":"ContainerStarted","Data":"bd7e31b58aa5ca62ae6b8edb1d123479b14e110f4201696a5e744a973fa93a56"} Dec 09 18:01:01 crc kubenswrapper[4853]: I1209 18:01:01.887216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421721-8srlv" event={"ID":"160001ba-a330-4185-b1a1-67bfbdda8cd9","Type":"ContainerStarted","Data":"2b129c65b8e71901fbdc58fbb63a9ca9ef76e009c2b268bfc52cfde24e4270c6"} Dec 09 18:01:01 crc kubenswrapper[4853]: I1209 18:01:01.912348 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29421721-8srlv" podStartSLOduration=1.912312795 podStartE2EDuration="1.912312795s" podCreationTimestamp="2025-12-09 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 18:01:01.911003039 +0000 UTC m=+3888.845742251" watchObservedRunningTime="2025-12-09 18:01:01.912312795 +0000 UTC m=+3888.847052017" Dec 09 18:01:02 crc kubenswrapper[4853]: E1209 18:01:02.568825 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:01:03 crc kubenswrapper[4853]: I1209 18:01:03.909634 4853 generic.go:334] "Generic (PLEG): container finished" podID="160001ba-a330-4185-b1a1-67bfbdda8cd9" containerID="bd7e31b58aa5ca62ae6b8edb1d123479b14e110f4201696a5e744a973fa93a56" exitCode=0 Dec 09 18:01:03 crc kubenswrapper[4853]: I1209 18:01:03.909679 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421721-8srlv" event={"ID":"160001ba-a330-4185-b1a1-67bfbdda8cd9","Type":"ContainerDied","Data":"bd7e31b58aa5ca62ae6b8edb1d123479b14e110f4201696a5e744a973fa93a56"} Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.354824 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.376465 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-combined-ca-bundle\") pod \"160001ba-a330-4185-b1a1-67bfbdda8cd9\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.376622 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-fernet-keys\") pod \"160001ba-a330-4185-b1a1-67bfbdda8cd9\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.376673 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-config-data\") pod \"160001ba-a330-4185-b1a1-67bfbdda8cd9\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.376732 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcz55\" (UniqueName: \"kubernetes.io/projected/160001ba-a330-4185-b1a1-67bfbdda8cd9-kube-api-access-vcz55\") pod \"160001ba-a330-4185-b1a1-67bfbdda8cd9\" (UID: \"160001ba-a330-4185-b1a1-67bfbdda8cd9\") " Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.383402 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160001ba-a330-4185-b1a1-67bfbdda8cd9-kube-api-access-vcz55" (OuterVolumeSpecName: "kube-api-access-vcz55") pod "160001ba-a330-4185-b1a1-67bfbdda8cd9" (UID: "160001ba-a330-4185-b1a1-67bfbdda8cd9"). InnerVolumeSpecName "kube-api-access-vcz55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.387133 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "160001ba-a330-4185-b1a1-67bfbdda8cd9" (UID: "160001ba-a330-4185-b1a1-67bfbdda8cd9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.453443 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "160001ba-a330-4185-b1a1-67bfbdda8cd9" (UID: "160001ba-a330-4185-b1a1-67bfbdda8cd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.475833 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-config-data" (OuterVolumeSpecName: "config-data") pod "160001ba-a330-4185-b1a1-67bfbdda8cd9" (UID: "160001ba-a330-4185-b1a1-67bfbdda8cd9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.478734 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.478767 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.478781 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcz55\" (UniqueName: \"kubernetes.io/projected/160001ba-a330-4185-b1a1-67bfbdda8cd9-kube-api-access-vcz55\") on node \"crc\" DevicePath \"\"" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.478795 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160001ba-a330-4185-b1a1-67bfbdda8cd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.964158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421721-8srlv" event={"ID":"160001ba-a330-4185-b1a1-67bfbdda8cd9","Type":"ContainerDied","Data":"2b129c65b8e71901fbdc58fbb63a9ca9ef76e009c2b268bfc52cfde24e4270c6"} Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.964672 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b129c65b8e71901fbdc58fbb63a9ca9ef76e009c2b268bfc52cfde24e4270c6" Dec 09 18:01:05 crc kubenswrapper[4853]: I1209 18:01:05.964789 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421721-8srlv" Dec 09 18:01:06 crc kubenswrapper[4853]: E1209 18:01:06.570956 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:01:09 crc kubenswrapper[4853]: I1209 18:01:09.566891 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:01:09 crc kubenswrapper[4853]: E1209 18:01:09.567576 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:01:17 crc kubenswrapper[4853]: E1209 18:01:17.575894 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:01:20 crc kubenswrapper[4853]: I1209 18:01:20.567630 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:01:20 crc kubenswrapper[4853]: E1209 
18:01:20.568697 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:01:20 crc kubenswrapper[4853]: E1209 18:01:20.570814 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:01:30 crc kubenswrapper[4853]: I1209 18:01:30.570013 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:01:30 crc kubenswrapper[4853]: E1209 18:01:30.675793 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:01:30 crc kubenswrapper[4853]: E1209 18:01:30.675862 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:01:30 crc kubenswrapper[4853]: E1209 18:01:30.675995 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:01:30 crc kubenswrapper[4853]: E1209 18:01:30.677410 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:01:31 crc kubenswrapper[4853]: E1209 18:01:31.701750 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:01:31 crc kubenswrapper[4853]: E1209 18:01:31.702057 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:01:31 crc kubenswrapper[4853]: E1209 18:01:31.702222 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:01:31 crc kubenswrapper[4853]: E1209 18:01:31.703419 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:01:32 crc kubenswrapper[4853]: I1209 18:01:32.567066 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:01:32 crc kubenswrapper[4853]: E1209 18:01:32.567544 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:01:42 crc kubenswrapper[4853]: E1209 18:01:42.577257 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:01:45 crc kubenswrapper[4853]: E1209 18:01:45.570236 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:01:47 crc kubenswrapper[4853]: I1209 18:01:47.567252 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:01:47 crc kubenswrapper[4853]: E1209 18:01:47.567815 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:01:54 crc kubenswrapper[4853]: E1209 18:01:54.569376 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:02:00 crc kubenswrapper[4853]: E1209 18:02:00.570851 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
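
The ErrImagePull dumps above give the root cause for both stuck pods: the current-tested tag was removed from quay.rdoproject.org, so every pull attempt fails at the manifest lookup and the kubelet drops into ImagePullBackOff. A quick way to confirm a tag has really gone away is to query the standard Docker Registry v2 API directly; a minimal Python sketch (requires the third-party requests package, and assumes the repository allows anonymous pulls — an authenticated registry would need a token exchange this sketch omits):

```python
# Check whether an image tag still resolves on a v2 registry.
import requests

REGISTRY = "https://quay.rdoproject.org"
REPO = "podified-master-centos10/openstack-ceilometer-central"
TAG = "current-tested"

resp = requests.head(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    timeout=10,
)
# 200 -> tag exists; 404 matches the "Tag ... was deleted or has expired"
# manifest error in the kubelet log above.
print(TAG, "exists" if resp.status_code == 200 else f"missing (HTTP {resp.status_code})")
```

Until the tag is restored (or the pod spec is pointed at a tag that exists), the back-off entries below simply repeat.
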
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:02:01 crc kubenswrapper[4853]: I1209 18:02:01.567754 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:02:01 crc kubenswrapper[4853]: E1209 18:02:01.568266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.158734 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l4g2q"] Dec 09 18:02:05 crc kubenswrapper[4853]: E1209 18:02:05.159732 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160001ba-a330-4185-b1a1-67bfbdda8cd9" containerName="keystone-cron" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.159745 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="160001ba-a330-4185-b1a1-67bfbdda8cd9" containerName="keystone-cron" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.160003 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="160001ba-a330-4185-b1a1-67bfbdda8cd9" containerName="keystone-cron" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.161731 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.186719 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4g2q"] Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.213248 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-utilities\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.213391 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-catalog-content\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.213555 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fss7b\" (UniqueName: \"kubernetes.io/projected/b70216e4-610d-4995-9835-a86c68b88e7a-kube-api-access-fss7b\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.316417 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-catalog-content\") pod \"certified-operators-l4g2q\" (UID: 
\"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.316921 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fss7b\" (UniqueName: \"kubernetes.io/projected/b70216e4-610d-4995-9835-a86c68b88e7a-kube-api-access-fss7b\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.317010 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-catalog-content\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.317219 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-utilities\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.317726 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-utilities\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.339566 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fss7b\" (UniqueName: \"kubernetes.io/projected/b70216e4-610d-4995-9835-a86c68b88e7a-kube-api-access-fss7b\") pod \"certified-operators-l4g2q\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:05 crc kubenswrapper[4853]: I1209 18:02:05.500902 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:06 crc kubenswrapper[4853]: I1209 18:02:06.044675 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4g2q"] Dec 09 18:02:06 crc kubenswrapper[4853]: I1209 18:02:06.628280 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4g2q" event={"ID":"b70216e4-610d-4995-9835-a86c68b88e7a","Type":"ContainerStarted","Data":"535656b8720ed5fc4d443008307d325b241dc19d91cb0fec81bd9fba9765841d"} Dec 09 18:02:07 crc kubenswrapper[4853]: I1209 18:02:07.639635 4853 generic.go:334] "Generic (PLEG): container finished" podID="b70216e4-610d-4995-9835-a86c68b88e7a" containerID="b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27" exitCode=0 Dec 09 18:02:07 crc kubenswrapper[4853]: I1209 18:02:07.639703 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4g2q" event={"ID":"b70216e4-610d-4995-9835-a86c68b88e7a","Type":"ContainerDied","Data":"b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27"} Dec 09 18:02:08 crc kubenswrapper[4853]: E1209 18:02:08.569649 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:02:09 crc kubenswrapper[4853]: I1209 18:02:09.661002 4853 generic.go:334] "Generic (PLEG): container finished" podID="b70216e4-610d-4995-9835-a86c68b88e7a" containerID="6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257" exitCode=0 Dec 09 18:02:09 crc kubenswrapper[4853]: I1209 18:02:09.661073 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4g2q" event={"ID":"b70216e4-610d-4995-9835-a86c68b88e7a","Type":"ContainerDied","Data":"6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257"} Dec 09 18:02:12 crc kubenswrapper[4853]: I1209 18:02:12.567832 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:02:12 crc kubenswrapper[4853]: E1209 18:02:12.569717 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:02:12 crc kubenswrapper[4853]: I1209 18:02:12.697359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4g2q" event={"ID":"b70216e4-610d-4995-9835-a86c68b88e7a","Type":"ContainerStarted","Data":"9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40"} Dec 09 18:02:12 crc kubenswrapper[4853]: I1209 18:02:12.722457 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l4g2q" podStartSLOduration=3.183034624 podStartE2EDuration="7.722439467s" podCreationTimestamp="2025-12-09 18:02:05 +0000 UTC" firstStartedPulling="2025-12-09 18:02:07.642418578 +0000 UTC m=+3954.577157760" 
lastFinishedPulling="2025-12-09 18:02:12.181823421 +0000 UTC m=+3959.116562603" observedRunningTime="2025-12-09 18:02:12.719908139 +0000 UTC m=+3959.654647341" watchObservedRunningTime="2025-12-09 18:02:12.722439467 +0000 UTC m=+3959.657178639" Dec 09 18:02:14 crc kubenswrapper[4853]: E1209 18:02:14.569882 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:02:15 crc kubenswrapper[4853]: I1209 18:02:15.502696 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:15 crc kubenswrapper[4853]: I1209 18:02:15.502981 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:15 crc kubenswrapper[4853]: I1209 18:02:15.583670 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:23 crc kubenswrapper[4853]: E1209 18:02:23.579920 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:02:25 crc kubenswrapper[4853]: I1209 18:02:25.564844 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:25 crc kubenswrapper[4853]: I1209 18:02:25.567594 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:02:25 crc kubenswrapper[4853]: E1209 18:02:25.568052 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:02:25 crc kubenswrapper[4853]: I1209 18:02:25.838545 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l4g2q"] Dec 09 18:02:25 crc kubenswrapper[4853]: I1209 18:02:25.865409 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l4g2q" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="registry-server" containerID="cri-o://9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40" gracePeriod=2 Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.451003 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:26 crc kubenswrapper[4853]: E1209 18:02:26.568579 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.575863 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fss7b\" (UniqueName: \"kubernetes.io/projected/b70216e4-610d-4995-9835-a86c68b88e7a-kube-api-access-fss7b\") pod \"b70216e4-610d-4995-9835-a86c68b88e7a\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.575925 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-utilities\") pod \"b70216e4-610d-4995-9835-a86c68b88e7a\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.576231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-catalog-content\") pod \"b70216e4-610d-4995-9835-a86c68b88e7a\" (UID: \"b70216e4-610d-4995-9835-a86c68b88e7a\") " Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.577215 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-utilities" (OuterVolumeSpecName: "utilities") pod "b70216e4-610d-4995-9835-a86c68b88e7a" (UID: "b70216e4-610d-4995-9835-a86c68b88e7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.586899 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70216e4-610d-4995-9835-a86c68b88e7a-kube-api-access-fss7b" (OuterVolumeSpecName: "kube-api-access-fss7b") pod "b70216e4-610d-4995-9835-a86c68b88e7a" (UID: "b70216e4-610d-4995-9835-a86c68b88e7a"). InnerVolumeSpecName "kube-api-access-fss7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.629716 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b70216e4-610d-4995-9835-a86c68b88e7a" (UID: "b70216e4-610d-4995-9835-a86c68b88e7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.679986 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.680084 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fss7b\" (UniqueName: \"kubernetes.io/projected/b70216e4-610d-4995-9835-a86c68b88e7a-kube-api-access-fss7b\") on node \"crc\" DevicePath \"\"" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.680107 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70216e4-610d-4995-9835-a86c68b88e7a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.879682 4853 generic.go:334] "Generic (PLEG): container finished" podID="b70216e4-610d-4995-9835-a86c68b88e7a" containerID="9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40" exitCode=0 Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.879734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4g2q" event={"ID":"b70216e4-610d-4995-9835-a86c68b88e7a","Type":"ContainerDied","Data":"9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40"} Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.879773 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4g2q" event={"ID":"b70216e4-610d-4995-9835-a86c68b88e7a","Type":"ContainerDied","Data":"535656b8720ed5fc4d443008307d325b241dc19d91cb0fec81bd9fba9765841d"} Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.879795 4853 scope.go:117] "RemoveContainer" containerID="9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.879802 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4g2q" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.916762 4853 scope.go:117] "RemoveContainer" containerID="6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257" Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.941365 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l4g2q"] Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.961966 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l4g2q"] Dec 09 18:02:26 crc kubenswrapper[4853]: I1209 18:02:26.969107 4853 scope.go:117] "RemoveContainer" containerID="b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27" Dec 09 18:02:27 crc kubenswrapper[4853]: I1209 18:02:27.030844 4853 scope.go:117] "RemoveContainer" containerID="9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40" Dec 09 18:02:27 crc kubenswrapper[4853]: E1209 18:02:27.031499 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40\": container with ID starting with 9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40 not found: ID does not exist" containerID="9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40" Dec 09 18:02:27 crc kubenswrapper[4853]: I1209 18:02:27.031565 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40"} err="failed to get container status \"9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40\": rpc error: code = NotFound desc = could not find container \"9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40\": container with ID starting with 9530329ee738dcee352c0ec0f9a87b26fdff558e30afbe7b620d5c3c21342e40 not found: ID does not exist" Dec 09 18:02:27 crc kubenswrapper[4853]: I1209 18:02:27.031691 4853 scope.go:117] "RemoveContainer" containerID="6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257" Dec 09 18:02:27 crc kubenswrapper[4853]: E1209 18:02:27.032282 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257\": container with ID starting with 6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257 not found: ID does not exist" containerID="6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257" Dec 09 18:02:27 crc kubenswrapper[4853]: I1209 18:02:27.032352 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257"} err="failed to get container status \"6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257\": rpc error: code = NotFound desc = could not find container \"6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257\": container with ID starting with 6a7cbb74950086988913b52678ffaa64ea5db7dec9e050c9bdae5606d7b7d257 not found: ID does not exist" Dec 09 18:02:27 crc kubenswrapper[4853]: I1209 18:02:27.032396 4853 scope.go:117] "RemoveContainer" containerID="b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27" Dec 09 18:02:27 crc kubenswrapper[4853]: E1209 18:02:27.032849 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27\": container with ID starting with b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27 not found: ID does not exist" containerID="b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27" Dec 09 18:02:27 crc kubenswrapper[4853]: I1209 18:02:27.032892 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27"} err="failed to get container status \"b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27\": rpc error: code = NotFound desc = could not find container \"b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27\": container with ID starting with b7c1a8c8b7041ef9fe61d06eadc7bf43364ac9ec775f370a0c53b2733e049a27 not found: ID does not exist" Dec 09 18:02:27 crc kubenswrapper[4853]: I1209 18:02:27.578743 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" path="/var/lib/kubelet/pods/b70216e4-610d-4995-9835-a86c68b88e7a/volumes" Dec 09 18:02:33 crc kubenswrapper[4853]: I1209 18:02:33.844264 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qtwd6"] Dec 09 18:02:33 crc kubenswrapper[4853]: E1209 18:02:33.845831 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="extract-content" Dec 09 18:02:33 crc kubenswrapper[4853]: I1209 18:02:33.845858 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="extract-content" Dec 09 18:02:33 crc kubenswrapper[4853]: E1209 18:02:33.845894 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="extract-utilities" Dec 09 18:02:33 crc kubenswrapper[4853]: I1209 18:02:33.845908 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="extract-utilities" Dec 09 18:02:33 crc kubenswrapper[4853]: E1209 18:02:33.845938 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="registry-server" Dec 09 18:02:33 crc kubenswrapper[4853]: I1209 18:02:33.845952 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="registry-server" Dec 09 18:02:33 crc kubenswrapper[4853]: I1209 18:02:33.846456 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70216e4-610d-4995-9835-a86c68b88e7a" containerName="registry-server" Dec 09 18:02:33 crc kubenswrapper[4853]: I1209 18:02:33.849907 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:33 crc kubenswrapper[4853]: I1209 18:02:33.860784 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtwd6"] Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.005877 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwfsr\" (UniqueName: \"kubernetes.io/projected/83fda1fb-22bb-489f-a86c-8d50777f656d-kube-api-access-nwfsr\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.005949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-utilities\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.006891 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-catalog-content\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.109859 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-catalog-content\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.110041 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwfsr\" (UniqueName: \"kubernetes.io/projected/83fda1fb-22bb-489f-a86c-8d50777f656d-kube-api-access-nwfsr\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.110101 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-utilities\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.110570 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-catalog-content\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.110702 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-utilities\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.133539 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nwfsr\" (UniqueName: \"kubernetes.io/projected/83fda1fb-22bb-489f-a86c-8d50777f656d-kube-api-access-nwfsr\") pod \"redhat-operators-qtwd6\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.177185 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.672668 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtwd6"] Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.969219 4853 generic.go:334] "Generic (PLEG): container finished" podID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerID="1685ecc626ada8f1e2fe98fff57ec2e4106798403806937c047667a34644d484" exitCode=0 Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.969257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtwd6" event={"ID":"83fda1fb-22bb-489f-a86c-8d50777f656d","Type":"ContainerDied","Data":"1685ecc626ada8f1e2fe98fff57ec2e4106798403806937c047667a34644d484"} Dec 09 18:02:34 crc kubenswrapper[4853]: I1209 18:02:34.969279 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtwd6" event={"ID":"83fda1fb-22bb-489f-a86c-8d50777f656d","Type":"ContainerStarted","Data":"89cabe39242e4f78c6f5b09ef3c594371634f9f8024e99c6e70a6eb8d591d0c8"} Dec 09 18:02:35 crc kubenswrapper[4853]: E1209 18:02:35.572105 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:02:37 crc kubenswrapper[4853]: I1209 18:02:36.999878 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtwd6" event={"ID":"83fda1fb-22bb-489f-a86c-8d50777f656d","Type":"ContainerStarted","Data":"bb30586f3bae22b19117ac08644104ebc12d5a813bb362b63a011766b088b18d"} Dec 09 18:02:38 crc kubenswrapper[4853]: I1209 18:02:38.568412 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:02:38 crc kubenswrapper[4853]: E1209 18:02:38.569397 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:02:39 crc kubenswrapper[4853]: I1209 18:02:39.019752 4853 generic.go:334] "Generic (PLEG): container finished" podID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerID="bb30586f3bae22b19117ac08644104ebc12d5a813bb362b63a011766b088b18d" exitCode=0 Dec 09 18:02:39 crc kubenswrapper[4853]: I1209 18:02:39.019800 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtwd6" event={"ID":"83fda1fb-22bb-489f-a86c-8d50777f656d","Type":"ContainerDied","Data":"bb30586f3bae22b19117ac08644104ebc12d5a813bb362b63a011766b088b18d"} Dec 09 18:02:40 crc kubenswrapper[4853]: I1209 18:02:40.034787 
4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtwd6" event={"ID":"83fda1fb-22bb-489f-a86c-8d50777f656d","Type":"ContainerStarted","Data":"a7478fb88ab6e72f71d34f077601084b6ad40e0c5ad3cb9005998dc2b46f1865"} Dec 09 18:02:40 crc kubenswrapper[4853]: I1209 18:02:40.062766 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qtwd6" podStartSLOduration=2.490249474 podStartE2EDuration="7.062747084s" podCreationTimestamp="2025-12-09 18:02:33 +0000 UTC" firstStartedPulling="2025-12-09 18:02:34.971282821 +0000 UTC m=+3981.906022003" lastFinishedPulling="2025-12-09 18:02:39.543780421 +0000 UTC m=+3986.478519613" observedRunningTime="2025-12-09 18:02:40.055659329 +0000 UTC m=+3986.990398531" watchObservedRunningTime="2025-12-09 18:02:40.062747084 +0000 UTC m=+3986.997486266" Dec 09 18:02:40 crc kubenswrapper[4853]: E1209 18:02:40.569904 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:02:44 crc kubenswrapper[4853]: I1209 18:02:44.177486 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:44 crc kubenswrapper[4853]: I1209 18:02:44.178119 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:45 crc kubenswrapper[4853]: I1209 18:02:45.238424 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qtwd6" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="registry-server" probeResult="failure" output=< Dec 09 18:02:45 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 18:02:45 crc kubenswrapper[4853]: > Dec 09 18:02:49 crc kubenswrapper[4853]: E1209 18:02:49.571209 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:02:52 crc kubenswrapper[4853]: E1209 18:02:52.570010 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:02:53 crc kubenswrapper[4853]: I1209 18:02:53.579731 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:02:53 crc kubenswrapper[4853]: E1209 18:02:53.580321 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" 
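
The multi-line probe output above ("timeout: failed to connect service \":50051\" within 1s") is the marketplace registry-server's gRPC endpoint not yet accepting connections; the probe flips to "started" and then "ready" about nine seconds later in the entries that follow. The actual check is a gRPC health probe, but a plain TCP connect reproduces the symptom; a rough stand-in sketch (localhost and the port are illustrative, in practice the kubelet targets the pod IP):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True once something is listening; False mirrors the probe's connect timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("127.0.0.1", 50051))  # False until registry-server binds the port
```

A brief unhealthy window like this is normal while the catalog index loads; it only matters if the failures persist past the probe's failure threshold.
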
Dec 09 18:02:54 crc kubenswrapper[4853]: I1209 18:02:54.257872 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:54 crc kubenswrapper[4853]: I1209 18:02:54.336864 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:54 crc kubenswrapper[4853]: I1209 18:02:54.496306 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtwd6"] Dec 09 18:02:56 crc kubenswrapper[4853]: I1209 18:02:56.328155 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qtwd6" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="registry-server" containerID="cri-o://a7478fb88ab6e72f71d34f077601084b6ad40e0c5ad3cb9005998dc2b46f1865" gracePeriod=2 Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.341839 4853 generic.go:334] "Generic (PLEG): container finished" podID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerID="a7478fb88ab6e72f71d34f077601084b6ad40e0c5ad3cb9005998dc2b46f1865" exitCode=0 Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.341895 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtwd6" event={"ID":"83fda1fb-22bb-489f-a86c-8d50777f656d","Type":"ContainerDied","Data":"a7478fb88ab6e72f71d34f077601084b6ad40e0c5ad3cb9005998dc2b46f1865"} Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.342905 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtwd6" event={"ID":"83fda1fb-22bb-489f-a86c-8d50777f656d","Type":"ContainerDied","Data":"89cabe39242e4f78c6f5b09ef3c594371634f9f8024e99c6e70a6eb8d591d0c8"} Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.342934 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89cabe39242e4f78c6f5b09ef3c594371634f9f8024e99c6e70a6eb8d591d0c8" Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.468795 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.551393 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-utilities\") pod \"83fda1fb-22bb-489f-a86c-8d50777f656d\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.551768 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwfsr\" (UniqueName: \"kubernetes.io/projected/83fda1fb-22bb-489f-a86c-8d50777f656d-kube-api-access-nwfsr\") pod \"83fda1fb-22bb-489f-a86c-8d50777f656d\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.551820 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-catalog-content\") pod \"83fda1fb-22bb-489f-a86c-8d50777f656d\" (UID: \"83fda1fb-22bb-489f-a86c-8d50777f656d\") " Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.552968 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-utilities" (OuterVolumeSpecName: "utilities") pod "83fda1fb-22bb-489f-a86c-8d50777f656d" (UID: "83fda1fb-22bb-489f-a86c-8d50777f656d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.570898 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fda1fb-22bb-489f-a86c-8d50777f656d-kube-api-access-nwfsr" (OuterVolumeSpecName: "kube-api-access-nwfsr") pod "83fda1fb-22bb-489f-a86c-8d50777f656d" (UID: "83fda1fb-22bb-489f-a86c-8d50777f656d"). InnerVolumeSpecName "kube-api-access-nwfsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.653920 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwfsr\" (UniqueName: \"kubernetes.io/projected/83fda1fb-22bb-489f-a86c-8d50777f656d-kube-api-access-nwfsr\") on node \"crc\" DevicePath \"\"" Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.653960 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.684331 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83fda1fb-22bb-489f-a86c-8d50777f656d" (UID: "83fda1fb-22bb-489f-a86c-8d50777f656d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:02:57 crc kubenswrapper[4853]: I1209 18:02:57.755865 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fda1fb-22bb-489f-a86c-8d50777f656d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:02:58 crc kubenswrapper[4853]: I1209 18:02:58.356718 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qtwd6" Dec 09 18:02:58 crc kubenswrapper[4853]: I1209 18:02:58.397351 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtwd6"] Dec 09 18:02:58 crc kubenswrapper[4853]: I1209 18:02:58.407481 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qtwd6"] Dec 09 18:02:59 crc kubenswrapper[4853]: I1209 18:02:59.582408 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" path="/var/lib/kubelet/pods/83fda1fb-22bb-489f-a86c-8d50777f656d/volumes" Dec 09 18:03:02 crc kubenswrapper[4853]: E1209 18:03:02.571415 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:03:04 crc kubenswrapper[4853]: E1209 18:03:04.569436 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:03:08 crc kubenswrapper[4853]: I1209 18:03:08.567579 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:03:08 crc kubenswrapper[4853]: E1209 18:03:08.568536 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:03:13 crc kubenswrapper[4853]: E1209 18:03:13.586098 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:03:17 crc kubenswrapper[4853]: E1209 18:03:17.573956 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:03:20 crc kubenswrapper[4853]: I1209 18:03:20.568806 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:03:20 crc kubenswrapper[4853]: E1209 18:03:20.569529 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:03:25 crc kubenswrapper[4853]: E1209 18:03:25.603837 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:03:32 crc kubenswrapper[4853]: E1209 18:03:32.570117 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:03:33 crc kubenswrapper[4853]: I1209 18:03:33.568423 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:03:34 crc kubenswrapper[4853]: I1209 18:03:34.855922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"695a908365ab3a75177fe8f8d8f182a968490547570450da1a3e1c38103c1f3e"} Dec 09 18:03:39 crc kubenswrapper[4853]: E1209 18:03:39.570120 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:03:45 crc kubenswrapper[4853]: E1209 18:03:45.571055 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:03:50 crc kubenswrapper[4853]: E1209 18:03:50.569944 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:03:59 crc kubenswrapper[4853]: E1209 18:03:59.570747 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:04:02 crc kubenswrapper[4853]: E1209 18:04:02.573296 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:04:11 crc kubenswrapper[4853]: E1209 18:04:11.569698 4853 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:04:14 crc kubenswrapper[4853]: E1209 18:04:14.569511 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:04:26 crc kubenswrapper[4853]: E1209 18:04:26.570948 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:04:26 crc kubenswrapper[4853]: E1209 18:04:26.570978 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:04:37 crc kubenswrapper[4853]: E1209 18:04:37.579213 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:04:41 crc kubenswrapper[4853]: E1209 18:04:41.572323 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:04:52 crc kubenswrapper[4853]: E1209 18:04:52.574636 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:04:52 crc kubenswrapper[4853]: E1209 18:04:52.574759 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:05:03 crc kubenswrapper[4853]: E1209 18:05:03.580089 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:05:03 crc 
kubenswrapper[4853]: E1209 18:05:03.580156 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:05:11 crc kubenswrapper[4853]: I1209 18:05:11.076127 4853 generic.go:334] "Generic (PLEG): container finished" podID="fb81bc18-40f0-48b1-94a1-c0f4ca35e36c" containerID="0df72078dd3b4441c2a1f8e64920911514451eee19e46c31eac83d34f5f18791" exitCode=2 Dec 09 18:05:11 crc kubenswrapper[4853]: I1209 18:05:11.076194 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" event={"ID":"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c","Type":"ContainerDied","Data":"0df72078dd3b4441c2a1f8e64920911514451eee19e46c31eac83d34f5f18791"} Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.604237 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.715723 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-inventory\") pod \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.715853 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h495n\" (UniqueName: \"kubernetes.io/projected/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-kube-api-access-h495n\") pod \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.715914 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-ssh-key\") pod \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\" (UID: \"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c\") " Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.722275 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-kube-api-access-h495n" (OuterVolumeSpecName: "kube-api-access-h495n") pod "fb81bc18-40f0-48b1-94a1-c0f4ca35e36c" (UID: "fb81bc18-40f0-48b1-94a1-c0f4ca35e36c"). InnerVolumeSpecName "kube-api-access-h495n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.746819 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb81bc18-40f0-48b1-94a1-c0f4ca35e36c" (UID: "fb81bc18-40f0-48b1-94a1-c0f4ca35e36c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.747665 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-inventory" (OuterVolumeSpecName: "inventory") pod "fb81bc18-40f0-48b1-94a1-c0f4ca35e36c" (UID: "fb81bc18-40f0-48b1-94a1-c0f4ca35e36c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.818467 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.818512 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h495n\" (UniqueName: \"kubernetes.io/projected/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-kube-api-access-h495n\") on node \"crc\" DevicePath \"\"" Dec 09 18:05:12 crc kubenswrapper[4853]: I1209 18:05:12.818522 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb81bc18-40f0-48b1-94a1-c0f4ca35e36c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 18:05:13 crc kubenswrapper[4853]: I1209 18:05:13.103136 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" event={"ID":"fb81bc18-40f0-48b1-94a1-c0f4ca35e36c","Type":"ContainerDied","Data":"d843e04724a3d5aa7144f597e0f02f0cf79888e67042e8ad2f124666f73649cd"} Dec 09 18:05:13 crc kubenswrapper[4853]: I1209 18:05:13.103220 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d843e04724a3d5aa7144f597e0f02f0cf79888e67042e8ad2f124666f73649cd" Dec 09 18:05:13 crc kubenswrapper[4853]: I1209 18:05:13.103299 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x" Dec 09 18:05:16 crc kubenswrapper[4853]: E1209 18:05:16.571439 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:05:16 crc kubenswrapper[4853]: E1209 18:05:16.572022 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:05:29 crc kubenswrapper[4853]: E1209 18:05:29.569555 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:05:31 crc kubenswrapper[4853]: E1209 18:05:31.570220 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:05:44 crc kubenswrapper[4853]: E1209 18:05:44.572045 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:05:45 crc kubenswrapper[4853]: E1209 18:05:45.569144 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:05:55 crc kubenswrapper[4853]: E1209 18:05:55.569172 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:05:56 crc kubenswrapper[4853]: E1209 18:05:56.570415 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:05:58 crc kubenswrapper[4853]: I1209 18:05:58.593405 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:05:58 crc kubenswrapper[4853]: I1209 18:05:58.593894 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:06:07 crc kubenswrapper[4853]: E1209 18:06:07.569379 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:06:09 crc kubenswrapper[4853]: E1209 18:06:09.570130 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:06:19 crc kubenswrapper[4853]: E1209 18:06:19.569186 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:06:24 crc kubenswrapper[4853]: E1209 18:06:24.570322 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:06:28 crc kubenswrapper[4853]: I1209 18:06:28.592645 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:06:28 crc kubenswrapper[4853]: I1209 18:06:28.593310 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:06:34 crc kubenswrapper[4853]: I1209 18:06:34.571449 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:06:34 crc kubenswrapper[4853]: E1209 18:06:34.677943 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:06:34 crc kubenswrapper[4853]: E1209 18:06:34.678349 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:06:34 crc kubenswrapper[4853]: E1209 18:06:34.678536 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:06:34 crc kubenswrapper[4853]: E1209 18:06:34.680274 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:06:36 crc kubenswrapper[4853]: E1209 18:06:36.675676 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:06:36 crc kubenswrapper[4853]: E1209 18:06:36.676058 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:06:36 crc kubenswrapper[4853]: E1209 18:06:36.676250 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:06:36 crc kubenswrapper[4853]: E1209 18:06:36.677618 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:06:49 crc kubenswrapper[4853]: E1209 18:06:49.568905 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:06:49 crc kubenswrapper[4853]: E1209 18:06:49.569188 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:06:58 crc kubenswrapper[4853]: I1209 18:06:58.593308 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:06:58 crc kubenswrapper[4853]: I1209 18:06:58.595070 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:06:58 crc kubenswrapper[4853]: I1209 18:06:58.595127 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 18:06:58 crc kubenswrapper[4853]: I1209 18:06:58.595843 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"695a908365ab3a75177fe8f8d8f182a968490547570450da1a3e1c38103c1f3e"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 18:06:58 crc kubenswrapper[4853]: I1209 18:06:58.595894 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://695a908365ab3a75177fe8f8d8f182a968490547570450da1a3e1c38103c1f3e" gracePeriod=600 Dec 09 18:06:59 crc kubenswrapper[4853]: I1209 18:06:59.399208 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="695a908365ab3a75177fe8f8d8f182a968490547570450da1a3e1c38103c1f3e" exitCode=0 Dec 09 18:06:59 crc kubenswrapper[4853]: I1209 18:06:59.399290 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"695a908365ab3a75177fe8f8d8f182a968490547570450da1a3e1c38103c1f3e"} Dec 09 18:06:59 crc kubenswrapper[4853]: I1209 18:06:59.399871 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6"} Dec 09 18:06:59 crc kubenswrapper[4853]: I1209 18:06:59.399927 4853 scope.go:117] "RemoveContainer" containerID="b4186818fb1c88aea414cc3e64e798775d1679de9e354b16ddec492ce73bdb36" Dec 09 18:07:01 crc kubenswrapper[4853]: E1209 18:07:01.570172 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:07:01 crc kubenswrapper[4853]: E1209 18:07:01.570172 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:07:13 crc kubenswrapper[4853]: E1209 18:07:13.578996 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:07:15 crc kubenswrapper[4853]: E1209 18:07:15.570889 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:07:28 crc kubenswrapper[4853]: E1209 18:07:28.573125 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:07:29 crc kubenswrapper[4853]: E1209 18:07:29.569472 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:07:43 crc kubenswrapper[4853]: E1209 18:07:43.576474 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:07:43 crc kubenswrapper[4853]: E1209 18:07:43.579898 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.038105 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66"] Dec 09 18:07:50 crc kubenswrapper[4853]: E1209 18:07:50.039079 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="extract-utilities" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.039094 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="extract-utilities" Dec 09 18:07:50 crc kubenswrapper[4853]: E1209 18:07:50.039117 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="extract-content" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.039123 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="extract-content" Dec 09 18:07:50 crc kubenswrapper[4853]: E1209 18:07:50.039162 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb81bc18-40f0-48b1-94a1-c0f4ca35e36c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.039171 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb81bc18-40f0-48b1-94a1-c0f4ca35e36c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:07:50 crc kubenswrapper[4853]: E1209 18:07:50.039205 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="registry-server" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.039210 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="registry-server" Dec 09 
18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.039415 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb81bc18-40f0-48b1-94a1-c0f4ca35e36c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.039435 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fda1fb-22bb-489f-a86c-8d50777f656d" containerName="registry-server" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.040218 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.043142 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.043249 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.043398 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.044473 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.068710 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66"] Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.160498 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.160734 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lddj9\" (UniqueName: \"kubernetes.io/projected/4f764ae7-2150-4081-9763-a0ef9ce1640f-kube-api-access-lddj9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.160799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.263262 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lddj9\" (UniqueName: \"kubernetes.io/projected/4f764ae7-2150-4081-9763-a0ef9ce1640f-kube-api-access-lddj9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.263482 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.263745 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.269798 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.269820 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.283242 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lddj9\" (UniqueName: \"kubernetes.io/projected/4f764ae7-2150-4081-9763-a0ef9ce1640f-kube-api-access-lddj9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d9t66\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:50 crc kubenswrapper[4853]: I1209 18:07:50.376434 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:07:51 crc kubenswrapper[4853]: I1209 18:07:51.074168 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66"] Dec 09 18:07:52 crc kubenswrapper[4853]: I1209 18:07:52.018921 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" event={"ID":"4f764ae7-2150-4081-9763-a0ef9ce1640f","Type":"ContainerStarted","Data":"95dc41908d993d2377bbb15d5bf8d679b53fba2ade507158ffee06e2c60264f5"} Dec 09 18:07:53 crc kubenswrapper[4853]: I1209 18:07:53.032186 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" event={"ID":"4f764ae7-2150-4081-9763-a0ef9ce1640f","Type":"ContainerStarted","Data":"e18cd90c1066aedc9baa85b564bc41b094f1f1888201cec819b82f8ae046b5a4"} Dec 09 18:07:53 crc kubenswrapper[4853]: I1209 18:07:53.070400 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" podStartSLOduration=2.48040077 podStartE2EDuration="3.070381702s" podCreationTimestamp="2025-12-09 18:07:50 +0000 UTC" firstStartedPulling="2025-12-09 18:07:51.392257738 +0000 UTC m=+4298.326996920" lastFinishedPulling="2025-12-09 18:07:51.98223868 +0000 UTC m=+4298.916977852" observedRunningTime="2025-12-09 18:07:53.056295882 +0000 UTC m=+4299.991035064" watchObservedRunningTime="2025-12-09 18:07:53.070381702 +0000 UTC m=+4300.005120874" Dec 09 18:07:55 crc kubenswrapper[4853]: E1209 18:07:55.574236 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:07:57 crc kubenswrapper[4853]: E1209 18:07:57.571962 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:08:07 crc kubenswrapper[4853]: E1209 18:08:07.569406 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:08:10 crc kubenswrapper[4853]: E1209 18:08:10.571779 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:08:20 crc kubenswrapper[4853]: E1209 18:08:20.571144 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:08:25 crc kubenswrapper[4853]: E1209 18:08:25.569190 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:08:31 crc kubenswrapper[4853]: E1209 18:08:31.569782 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:08:40 crc kubenswrapper[4853]: E1209 18:08:40.570079 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:08:43 crc kubenswrapper[4853]: E1209 18:08:43.570937 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:08:52 crc kubenswrapper[4853]: E1209 18:08:52.570127 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:08:56 crc kubenswrapper[4853]: E1209 18:08:56.570703 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:08:58 crc kubenswrapper[4853]: I1209 18:08:58.593316 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:08:58 crc kubenswrapper[4853]: I1209 18:08:58.593707 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:09:06 crc kubenswrapper[4853]: E1209 18:09:06.570712 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:09:09 crc kubenswrapper[4853]: E1209 18:09:09.569195 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:09:20 crc kubenswrapper[4853]: E1209 18:09:20.573858 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:09:21 crc kubenswrapper[4853]: E1209 18:09:21.570224 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:09:28 crc kubenswrapper[4853]: I1209 18:09:28.592917 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:09:28 crc kubenswrapper[4853]: I1209 18:09:28.593522 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:09:30 crc kubenswrapper[4853]: I1209 18:09:30.812152 4853 scope.go:117] "RemoveContainer" containerID="bb30586f3bae22b19117ac08644104ebc12d5a813bb362b63a011766b088b18d" Dec 09 18:09:30 crc kubenswrapper[4853]: I1209 18:09:30.844993 4853 scope.go:117] "RemoveContainer" containerID="1685ecc626ada8f1e2fe98fff57ec2e4106798403806937c047667a34644d484" Dec 09 18:09:30 crc kubenswrapper[4853]: I1209 18:09:30.913026 4853 scope.go:117] "RemoveContainer" containerID="a7478fb88ab6e72f71d34f077601084b6ad40e0c5ad3cb9005998dc2b46f1865" Dec 09 18:09:34 crc kubenswrapper[4853]: E1209 18:09:34.569805 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:09:34 crc kubenswrapper[4853]: E1209 18:09:34.571029 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:09:46 crc kubenswrapper[4853]: E1209 18:09:46.569766 4853 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:09:46 crc kubenswrapper[4853]: E1209 18:09:46.569804 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.241047 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmgs"] Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.250410 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.255919 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmgs"] Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.330968 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-utilities\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.331476 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgznf\" (UniqueName: \"kubernetes.io/projected/894b84a0-0f33-422b-8a22-377c16602a03-kube-api-access-rgznf\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.331590 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-catalog-content\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.434337 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-utilities\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.434501 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgznf\" (UniqueName: \"kubernetes.io/projected/894b84a0-0f33-422b-8a22-377c16602a03-kube-api-access-rgznf\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.434644 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-catalog-content\") pod 
\"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.435362 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-catalog-content\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.435426 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-utilities\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.458920 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgznf\" (UniqueName: \"kubernetes.io/projected/894b84a0-0f33-422b-8a22-377c16602a03-kube-api-access-rgznf\") pod \"redhat-marketplace-hhmgs\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:49 crc kubenswrapper[4853]: I1209 18:09:49.577833 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:50 crc kubenswrapper[4853]: I1209 18:09:50.071534 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmgs"] Dec 09 18:09:50 crc kubenswrapper[4853]: W1209 18:09:50.074021 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894b84a0_0f33_422b_8a22_377c16602a03.slice/crio-8978f72ffc069667fed1096aa083f6078df2dc6a8896f3ade183fe3252497955 WatchSource:0}: Error finding container 8978f72ffc069667fed1096aa083f6078df2dc6a8896f3ade183fe3252497955: Status 404 returned error can't find the container with id 8978f72ffc069667fed1096aa083f6078df2dc6a8896f3ade183fe3252497955 Dec 09 18:09:50 crc kubenswrapper[4853]: I1209 18:09:50.500569 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmgs" event={"ID":"894b84a0-0f33-422b-8a22-377c16602a03","Type":"ContainerStarted","Data":"8978f72ffc069667fed1096aa083f6078df2dc6a8896f3ade183fe3252497955"} Dec 09 18:09:51 crc kubenswrapper[4853]: I1209 18:09:51.515024 4853 generic.go:334] "Generic (PLEG): container finished" podID="894b84a0-0f33-422b-8a22-377c16602a03" containerID="1bf854849132462016030fae943bd7e50106895220ab8bf0bfc789a413472345" exitCode=0 Dec 09 18:09:51 crc kubenswrapper[4853]: I1209 18:09:51.515081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmgs" event={"ID":"894b84a0-0f33-422b-8a22-377c16602a03","Type":"ContainerDied","Data":"1bf854849132462016030fae943bd7e50106895220ab8bf0bfc789a413472345"} Dec 09 18:09:52 crc kubenswrapper[4853]: I1209 18:09:52.535198 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmgs" event={"ID":"894b84a0-0f33-422b-8a22-377c16602a03","Type":"ContainerStarted","Data":"f471357728b609ec2b3f80ca69600d86b1a8110db96afc11bcb7070bcbc96125"} Dec 09 18:09:53 crc kubenswrapper[4853]: I1209 18:09:53.550735 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="894b84a0-0f33-422b-8a22-377c16602a03" containerID="f471357728b609ec2b3f80ca69600d86b1a8110db96afc11bcb7070bcbc96125" exitCode=0 Dec 09 18:09:53 crc kubenswrapper[4853]: I1209 18:09:53.550784 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmgs" event={"ID":"894b84a0-0f33-422b-8a22-377c16602a03","Type":"ContainerDied","Data":"f471357728b609ec2b3f80ca69600d86b1a8110db96afc11bcb7070bcbc96125"} Dec 09 18:09:54 crc kubenswrapper[4853]: I1209 18:09:54.578972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmgs" event={"ID":"894b84a0-0f33-422b-8a22-377c16602a03","Type":"ContainerStarted","Data":"a5a897f3b6835442ce49e980330f47edf7a0e9a9a986dba1e848dbab0a9d21d2"} Dec 09 18:09:54 crc kubenswrapper[4853]: I1209 18:09:54.611046 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hhmgs" podStartSLOduration=3.030721721 podStartE2EDuration="5.611026601s" podCreationTimestamp="2025-12-09 18:09:49 +0000 UTC" firstStartedPulling="2025-12-09 18:09:51.518202261 +0000 UTC m=+4418.452941463" lastFinishedPulling="2025-12-09 18:09:54.098507121 +0000 UTC m=+4421.033246343" observedRunningTime="2025-12-09 18:09:54.600197824 +0000 UTC m=+4421.534937026" watchObservedRunningTime="2025-12-09 18:09:54.611026601 +0000 UTC m=+4421.545765793" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.032948 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhrsr"] Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.036592 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.052291 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhrsr"] Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.135728 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26d927d-cf9a-4030-8b51-5a02cc9688ad-utilities\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.135828 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26d927d-cf9a-4030-8b51-5a02cc9688ad-catalog-content\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.135889 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tkf\" (UniqueName: \"kubernetes.io/projected/c26d927d-cf9a-4030-8b51-5a02cc9688ad-kube-api-access-p5tkf\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.238376 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26d927d-cf9a-4030-8b51-5a02cc9688ad-utilities\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " 
pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.238515 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26d927d-cf9a-4030-8b51-5a02cc9688ad-catalog-content\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.238611 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tkf\" (UniqueName: \"kubernetes.io/projected/c26d927d-cf9a-4030-8b51-5a02cc9688ad-kube-api-access-p5tkf\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.238994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c26d927d-cf9a-4030-8b51-5a02cc9688ad-utilities\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.239036 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c26d927d-cf9a-4030-8b51-5a02cc9688ad-catalog-content\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.258788 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tkf\" (UniqueName: \"kubernetes.io/projected/c26d927d-cf9a-4030-8b51-5a02cc9688ad-kube-api-access-p5tkf\") pod \"community-operators-zhrsr\" (UID: \"c26d927d-cf9a-4030-8b51-5a02cc9688ad\") " pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:57 crc kubenswrapper[4853]: I1209 18:09:57.366043 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.048482 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhrsr"] Dec 09 18:09:58 crc kubenswrapper[4853]: W1209 18:09:58.057723 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc26d927d_cf9a_4030_8b51_5a02cc9688ad.slice/crio-0bc6b2efc4e832988e6ac8a27997ec2609be9d5bd4f42f7c520ffabb2ee234c8 WatchSource:0}: Error finding container 0bc6b2efc4e832988e6ac8a27997ec2609be9d5bd4f42f7c520ffabb2ee234c8: Status 404 returned error can't find the container with id 0bc6b2efc4e832988e6ac8a27997ec2609be9d5bd4f42f7c520ffabb2ee234c8 Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.592741 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.593128 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.593190 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.594355 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.594462 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" gracePeriod=600 Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.631716 4853 generic.go:334] "Generic (PLEG): container finished" podID="c26d927d-cf9a-4030-8b51-5a02cc9688ad" containerID="4acc0abe83470fd07c1544c126956540dc4fe1114452b27f5b9d596c4f8670a4" exitCode=0 Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.632009 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrsr" event={"ID":"c26d927d-cf9a-4030-8b51-5a02cc9688ad","Type":"ContainerDied","Data":"4acc0abe83470fd07c1544c126956540dc4fe1114452b27f5b9d596c4f8670a4"} Dec 09 18:09:58 crc kubenswrapper[4853]: I1209 18:09:58.632137 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrsr" event={"ID":"c26d927d-cf9a-4030-8b51-5a02cc9688ad","Type":"ContainerStarted","Data":"0bc6b2efc4e832988e6ac8a27997ec2609be9d5bd4f42f7c520ffabb2ee234c8"} Dec 09 18:09:58 crc kubenswrapper[4853]: E1209 18:09:58.720676 4853 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:09:59 crc kubenswrapper[4853]: I1209 18:09:59.578434 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:59 crc kubenswrapper[4853]: I1209 18:09:59.578830 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:09:59 crc kubenswrapper[4853]: I1209 18:09:59.646273 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" exitCode=0 Dec 09 18:09:59 crc kubenswrapper[4853]: I1209 18:09:59.646332 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6"} Dec 09 18:09:59 crc kubenswrapper[4853]: I1209 18:09:59.646442 4853 scope.go:117] "RemoveContainer" containerID="695a908365ab3a75177fe8f8d8f182a968490547570450da1a3e1c38103c1f3e" Dec 09 18:09:59 crc kubenswrapper[4853]: I1209 18:09:59.647956 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:09:59 crc kubenswrapper[4853]: E1209 18:09:59.648484 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:10:00 crc kubenswrapper[4853]: I1209 18:10:00.025473 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:10:00 crc kubenswrapper[4853]: I1209 18:10:00.084541 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:10:00 crc kubenswrapper[4853]: E1209 18:10:00.569250 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:10:01 crc kubenswrapper[4853]: E1209 18:10:01.569585 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:10:01 crc kubenswrapper[4853]: I1209 18:10:01.771203 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-hhmgs"] Dec 09 18:10:01 crc kubenswrapper[4853]: I1209 18:10:01.771410 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hhmgs" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="registry-server" containerID="cri-o://a5a897f3b6835442ce49e980330f47edf7a0e9a9a986dba1e848dbab0a9d21d2" gracePeriod=2 Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.686442 4853 generic.go:334] "Generic (PLEG): container finished" podID="894b84a0-0f33-422b-8a22-377c16602a03" containerID="a5a897f3b6835442ce49e980330f47edf7a0e9a9a986dba1e848dbab0a9d21d2" exitCode=0 Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.686528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmgs" event={"ID":"894b84a0-0f33-422b-8a22-377c16602a03","Type":"ContainerDied","Data":"a5a897f3b6835442ce49e980330f47edf7a0e9a9a986dba1e848dbab0a9d21d2"} Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.687829 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hhmgs" event={"ID":"894b84a0-0f33-422b-8a22-377c16602a03","Type":"ContainerDied","Data":"8978f72ffc069667fed1096aa083f6078df2dc6a8896f3ade183fe3252497955"} Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.687844 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8978f72ffc069667fed1096aa083f6078df2dc6a8896f3ade183fe3252497955" Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.690285 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrsr" event={"ID":"c26d927d-cf9a-4030-8b51-5a02cc9688ad","Type":"ContainerStarted","Data":"5c501864b56749978892b2be8c70db40817c343f6debbbe82479fa422f479b7e"} Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.784443 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.853682 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgznf\" (UniqueName: \"kubernetes.io/projected/894b84a0-0f33-422b-8a22-377c16602a03-kube-api-access-rgznf\") pod \"894b84a0-0f33-422b-8a22-377c16602a03\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.853932 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-utilities\") pod \"894b84a0-0f33-422b-8a22-377c16602a03\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.854116 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-catalog-content\") pod \"894b84a0-0f33-422b-8a22-377c16602a03\" (UID: \"894b84a0-0f33-422b-8a22-377c16602a03\") " Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.857593 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-utilities" (OuterVolumeSpecName: "utilities") pod "894b84a0-0f33-422b-8a22-377c16602a03" (UID: "894b84a0-0f33-422b-8a22-377c16602a03"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.866748 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894b84a0-0f33-422b-8a22-377c16602a03-kube-api-access-rgznf" (OuterVolumeSpecName: "kube-api-access-rgznf") pod "894b84a0-0f33-422b-8a22-377c16602a03" (UID: "894b84a0-0f33-422b-8a22-377c16602a03"). InnerVolumeSpecName "kube-api-access-rgznf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.901463 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "894b84a0-0f33-422b-8a22-377c16602a03" (UID: "894b84a0-0f33-422b-8a22-377c16602a03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.957237 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgznf\" (UniqueName: \"kubernetes.io/projected/894b84a0-0f33-422b-8a22-377c16602a03-kube-api-access-rgznf\") on node \"crc\" DevicePath \"\"" Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.957278 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:10:02 crc kubenswrapper[4853]: I1209 18:10:02.957292 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/894b84a0-0f33-422b-8a22-377c16602a03-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:10:03 crc kubenswrapper[4853]: I1209 18:10:03.709887 4853 generic.go:334] "Generic (PLEG): container finished" podID="c26d927d-cf9a-4030-8b51-5a02cc9688ad" containerID="5c501864b56749978892b2be8c70db40817c343f6debbbe82479fa422f479b7e" exitCode=0 Dec 09 18:10:03 crc kubenswrapper[4853]: I1209 18:10:03.709956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrsr" event={"ID":"c26d927d-cf9a-4030-8b51-5a02cc9688ad","Type":"ContainerDied","Data":"5c501864b56749978892b2be8c70db40817c343f6debbbe82479fa422f479b7e"} Dec 09 18:10:03 crc kubenswrapper[4853]: I1209 18:10:03.709994 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hhmgs" Dec 09 18:10:03 crc kubenswrapper[4853]: I1209 18:10:03.776362 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmgs"] Dec 09 18:10:03 crc kubenswrapper[4853]: I1209 18:10:03.792864 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hhmgs"] Dec 09 18:10:05 crc kubenswrapper[4853]: I1209 18:10:05.579078 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="894b84a0-0f33-422b-8a22-377c16602a03" path="/var/lib/kubelet/pods/894b84a0-0f33-422b-8a22-377c16602a03/volumes" Dec 09 18:10:05 crc kubenswrapper[4853]: I1209 18:10:05.732456 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrsr" event={"ID":"c26d927d-cf9a-4030-8b51-5a02cc9688ad","Type":"ContainerStarted","Data":"2a9cb8a23e02f0b0bee874ae39a57813fd9340561fa576efc8fb282acd0ab119"} Dec 09 18:10:05 crc kubenswrapper[4853]: I1209 18:10:05.763656 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhrsr" podStartSLOduration=2.822666476 podStartE2EDuration="8.763636617s" podCreationTimestamp="2025-12-09 18:09:57 +0000 UTC" firstStartedPulling="2025-12-09 18:09:58.634148646 +0000 UTC m=+4425.568887828" lastFinishedPulling="2025-12-09 18:10:04.575118787 +0000 UTC m=+4431.509857969" observedRunningTime="2025-12-09 18:10:05.755170731 +0000 UTC m=+4432.689909953" watchObservedRunningTime="2025-12-09 18:10:05.763636617 +0000 UTC m=+4432.698375799" Dec 09 18:10:07 crc kubenswrapper[4853]: I1209 18:10:07.367223 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:10:07 crc kubenswrapper[4853]: I1209 18:10:07.368032 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:10:07 crc kubenswrapper[4853]: I1209 18:10:07.458541 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:10:11 crc kubenswrapper[4853]: I1209 18:10:11.570062 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:10:11 crc kubenswrapper[4853]: E1209 18:10:11.570834 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:10:12 crc kubenswrapper[4853]: E1209 18:10:12.571731 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:10:16 crc kubenswrapper[4853]: E1209 18:10:16.569921 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:10:17 crc kubenswrapper[4853]: I1209 18:10:17.451223 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhrsr" Dec 09 18:10:17 crc kubenswrapper[4853]: I1209 18:10:17.532280 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhrsr"] Dec 09 18:10:17 crc kubenswrapper[4853]: I1209 18:10:17.586818 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-462tl"] Dec 09 18:10:17 crc kubenswrapper[4853]: I1209 18:10:17.587055 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-462tl" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="registry-server" containerID="cri-o://d20d98d200385aaf6748fb9b0a049519c89c999b13941b8ea6b4b03bab839d7f" gracePeriod=2 Dec 09 18:10:17 crc kubenswrapper[4853]: I1209 18:10:17.886052 4853 generic.go:334] "Generic (PLEG): container finished" podID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerID="d20d98d200385aaf6748fb9b0a049519c89c999b13941b8ea6b4b03bab839d7f" exitCode=0 Dec 09 18:10:17 crc kubenswrapper[4853]: I1209 18:10:17.886946 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-462tl" event={"ID":"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce","Type":"ContainerDied","Data":"d20d98d200385aaf6748fb9b0a049519c89c999b13941b8ea6b4b03bab839d7f"} Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.107879 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-462tl" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.156867 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-catalog-content\") pod \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.157384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2dpp\" (UniqueName: \"kubernetes.io/projected/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-kube-api-access-s2dpp\") pod \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.157686 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-utilities\") pod \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\" (UID: \"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce\") " Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.159232 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-utilities" (OuterVolumeSpecName: "utilities") pod "cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" (UID: "cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.179836 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-kube-api-access-s2dpp" (OuterVolumeSpecName: "kube-api-access-s2dpp") pod "cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" (UID: "cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce"). InnerVolumeSpecName "kube-api-access-s2dpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.220081 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" (UID: "cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.261307 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.261334 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.261343 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2dpp\" (UniqueName: \"kubernetes.io/projected/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce-kube-api-access-s2dpp\") on node \"crc\" DevicePath \"\"" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.902669 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-462tl" event={"ID":"cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce","Type":"ContainerDied","Data":"257d9fa0bae13d9775c3d503dfb14f2fc006eafe1e9ef6209c8e232509bb3ac6"} Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.903004 4853 scope.go:117] "RemoveContainer" containerID="d20d98d200385aaf6748fb9b0a049519c89c999b13941b8ea6b4b03bab839d7f" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.902800 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-462tl" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.935530 4853 scope.go:117] "RemoveContainer" containerID="e602bfa529721ade63d780bbacedeaf60435872f2dd7a999ac4393d06039c87b" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.945151 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-462tl"] Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.966616 4853 scope.go:117] "RemoveContainer" containerID="8e6d7a050b5a265944bdd4fc3c85ef449a2d5b0b57cc25579f03b2d894bf06a1" Dec 09 18:10:18 crc kubenswrapper[4853]: I1209 18:10:18.993228 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-462tl"] Dec 09 18:10:19 crc kubenswrapper[4853]: I1209 18:10:19.583473 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" path="/var/lib/kubelet/pods/cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce/volumes" Dec 09 18:10:24 crc kubenswrapper[4853]: I1209 18:10:24.567456 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:10:24 crc kubenswrapper[4853]: E1209 18:10:24.568404 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:10:25 crc kubenswrapper[4853]: E1209 18:10:25.569488 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:10:30 crc kubenswrapper[4853]: E1209 18:10:30.570171 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:10:39 crc kubenswrapper[4853]: I1209 18:10:39.568923 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:10:39 crc kubenswrapper[4853]: E1209 18:10:39.570167 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:10:40 crc kubenswrapper[4853]: E1209 18:10:40.569479 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:10:42 crc kubenswrapper[4853]: E1209 18:10:42.570864 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:10:51 crc kubenswrapper[4853]: I1209 18:10:51.568024 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:10:51 crc kubenswrapper[4853]: E1209 18:10:51.569076 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:10:55 crc kubenswrapper[4853]: E1209 18:10:55.570313 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:10:55 crc kubenswrapper[4853]: E1209 18:10:55.570750 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:11:04 crc kubenswrapper[4853]: I1209 18:11:04.568456 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:11:04 crc kubenswrapper[4853]: E1209 18:11:04.569562 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:11:07 crc kubenswrapper[4853]: E1209 18:11:07.569555 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:11:09 crc kubenswrapper[4853]: E1209 18:11:09.569194 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:11:19 crc kubenswrapper[4853]: I1209 18:11:19.567502 4853 scope.go:117] "RemoveContainer" 
containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:11:19 crc kubenswrapper[4853]: E1209 18:11:19.569075 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:11:20 crc kubenswrapper[4853]: E1209 18:11:20.570749 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:11:22 crc kubenswrapper[4853]: E1209 18:11:22.568902 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:11:33 crc kubenswrapper[4853]: E1209 18:11:33.580138 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:11:33 crc kubenswrapper[4853]: E1209 18:11:33.580139 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:11:34 crc kubenswrapper[4853]: I1209 18:11:34.567633 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:11:34 crc kubenswrapper[4853]: E1209 18:11:34.568553 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:11:46 crc kubenswrapper[4853]: I1209 18:11:46.567745 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:11:46 crc kubenswrapper[4853]: E1209 18:11:46.568785 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:11:48 crc 
kubenswrapper[4853]: I1209 18:11:48.570269 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.660995 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.661077 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.661253 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source 
Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.662511 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.703854 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.703924 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.704081 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 18:11:48 crc kubenswrapper[4853]: E1209 18:11:48.705353 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:11:59 crc kubenswrapper[4853]: I1209 18:11:59.568309 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6"
Dec 09 18:11:59 crc kubenswrapper[4853]: E1209 18:11:59.569266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 18:11:59 crc kubenswrapper[4853]: E1209 18:11:59.569299 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:12:01 crc kubenswrapper[4853]: E1209 18:12:01.569808 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:12:10 crc kubenswrapper[4853]: I1209 18:12:10.567742 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6"
Dec 09 18:12:10 crc kubenswrapper[4853]: E1209 18:12:10.568823 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 18:12:12 crc kubenswrapper[4853]: E1209 18:12:12.570032 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:12:12 crc kubenswrapper[4853]: E1209 18:12:12.570315 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.122820 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-42bmk"]
Dec 09 18:12:14 crc kubenswrapper[4853]: E1209 18:12:14.123658 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="extract-content"
Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.123675 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="extract-content"
podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="extract-content" Dec 09 18:12:14 crc kubenswrapper[4853]: E1209 18:12:14.123700 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="registry-server" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.123707 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="registry-server" Dec 09 18:12:14 crc kubenswrapper[4853]: E1209 18:12:14.123722 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="extract-content" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.123729 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="extract-content" Dec 09 18:12:14 crc kubenswrapper[4853]: E1209 18:12:14.123749 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="registry-server" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.123756 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="registry-server" Dec 09 18:12:14 crc kubenswrapper[4853]: E1209 18:12:14.123789 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="extract-utilities" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.123796 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="extract-utilities" Dec 09 18:12:14 crc kubenswrapper[4853]: E1209 18:12:14.123821 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="extract-utilities" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.123830 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="extract-utilities" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.124140 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb6ed4cb-a6be-4bdc-ad3e-c21dea9775ce" containerName="registry-server" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.124181 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="894b84a0-0f33-422b-8a22-377c16602a03" containerName="registry-server" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.126238 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.140819 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42bmk"] Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.244775 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxtw\" (UniqueName: \"kubernetes.io/projected/9a062411-a0d1-4c26-bb73-204fc6c46cf1-kube-api-access-cwxtw\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.244970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-utilities\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.245012 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-catalog-content\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.346765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-utilities\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.346818 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-catalog-content\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.346929 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxtw\" (UniqueName: \"kubernetes.io/projected/9a062411-a0d1-4c26-bb73-204fc6c46cf1-kube-api-access-cwxtw\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.347185 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-utilities\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.347741 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-catalog-content\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.369207 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cwxtw\" (UniqueName: \"kubernetes.io/projected/9a062411-a0d1-4c26-bb73-204fc6c46cf1-kube-api-access-cwxtw\") pod \"certified-operators-42bmk\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:14 crc kubenswrapper[4853]: I1209 18:12:14.465457 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:15 crc kubenswrapper[4853]: I1209 18:12:15.008610 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-42bmk"] Dec 09 18:12:15 crc kubenswrapper[4853]: I1209 18:12:15.342560 4853 generic.go:334] "Generic (PLEG): container finished" podID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerID="40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd" exitCode=0 Dec 09 18:12:15 crc kubenswrapper[4853]: I1209 18:12:15.342715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42bmk" event={"ID":"9a062411-a0d1-4c26-bb73-204fc6c46cf1","Type":"ContainerDied","Data":"40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd"} Dec 09 18:12:15 crc kubenswrapper[4853]: I1209 18:12:15.342982 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42bmk" event={"ID":"9a062411-a0d1-4c26-bb73-204fc6c46cf1","Type":"ContainerStarted","Data":"dc29d31425d8883d48c60f8e20ddb3b6bced757f5f02566a568be7a692b02aa2"} Dec 09 18:12:17 crc kubenswrapper[4853]: I1209 18:12:17.421174 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42bmk" event={"ID":"9a062411-a0d1-4c26-bb73-204fc6c46cf1","Type":"ContainerStarted","Data":"bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab"} Dec 09 18:12:18 crc kubenswrapper[4853]: I1209 18:12:18.433275 4853 generic.go:334] "Generic (PLEG): container finished" podID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerID="bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab" exitCode=0 Dec 09 18:12:18 crc kubenswrapper[4853]: I1209 18:12:18.433339 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42bmk" event={"ID":"9a062411-a0d1-4c26-bb73-204fc6c46cf1","Type":"ContainerDied","Data":"bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab"} Dec 09 18:12:19 crc kubenswrapper[4853]: I1209 18:12:19.446383 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42bmk" event={"ID":"9a062411-a0d1-4c26-bb73-204fc6c46cf1","Type":"ContainerStarted","Data":"d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f"} Dec 09 18:12:19 crc kubenswrapper[4853]: I1209 18:12:19.468208 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-42bmk" podStartSLOduration=1.812803931 podStartE2EDuration="5.468185634s" podCreationTimestamp="2025-12-09 18:12:14 +0000 UTC" firstStartedPulling="2025-12-09 18:12:15.345279048 +0000 UTC m=+4562.280018230" lastFinishedPulling="2025-12-09 18:12:19.000660741 +0000 UTC m=+4565.935399933" observedRunningTime="2025-12-09 18:12:19.462085178 +0000 UTC m=+4566.396824380" watchObservedRunningTime="2025-12-09 18:12:19.468185634 +0000 UTC m=+4566.402924816" Dec 09 18:12:23 crc kubenswrapper[4853]: I1209 18:12:23.581867 4853 scope.go:117] "RemoveContainer" 
containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:12:23 crc kubenswrapper[4853]: E1209 18:12:23.582814 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:12:24 crc kubenswrapper[4853]: I1209 18:12:24.466440 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:24 crc kubenswrapper[4853]: I1209 18:12:24.466655 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:24 crc kubenswrapper[4853]: I1209 18:12:24.535359 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:24 crc kubenswrapper[4853]: E1209 18:12:24.569087 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:12:24 crc kubenswrapper[4853]: I1209 18:12:24.604357 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:24 crc kubenswrapper[4853]: I1209 18:12:24.777383 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42bmk"] Dec 09 18:12:26 crc kubenswrapper[4853]: I1209 18:12:26.550745 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-42bmk" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="registry-server" containerID="cri-o://d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f" gracePeriod=2 Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.119808 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.255065 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-utilities\") pod \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.255346 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-catalog-content\") pod \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.255409 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwxtw\" (UniqueName: \"kubernetes.io/projected/9a062411-a0d1-4c26-bb73-204fc6c46cf1-kube-api-access-cwxtw\") pod \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\" (UID: \"9a062411-a0d1-4c26-bb73-204fc6c46cf1\") " Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.256322 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-utilities" (OuterVolumeSpecName: "utilities") pod "9a062411-a0d1-4c26-bb73-204fc6c46cf1" (UID: "9a062411-a0d1-4c26-bb73-204fc6c46cf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.265011 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a062411-a0d1-4c26-bb73-204fc6c46cf1-kube-api-access-cwxtw" (OuterVolumeSpecName: "kube-api-access-cwxtw") pod "9a062411-a0d1-4c26-bb73-204fc6c46cf1" (UID: "9a062411-a0d1-4c26-bb73-204fc6c46cf1"). InnerVolumeSpecName "kube-api-access-cwxtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.341004 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a062411-a0d1-4c26-bb73-204fc6c46cf1" (UID: "9a062411-a0d1-4c26-bb73-204fc6c46cf1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.358847 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.358895 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwxtw\" (UniqueName: \"kubernetes.io/projected/9a062411-a0d1-4c26-bb73-204fc6c46cf1-kube-api-access-cwxtw\") on node \"crc\" DevicePath \"\"" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.358970 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a062411-a0d1-4c26-bb73-204fc6c46cf1-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.562691 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-42bmk" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.562714 4853 generic.go:334] "Generic (PLEG): container finished" podID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerID="d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f" exitCode=0 Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.562697 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42bmk" event={"ID":"9a062411-a0d1-4c26-bb73-204fc6c46cf1","Type":"ContainerDied","Data":"d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f"} Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.562821 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-42bmk" event={"ID":"9a062411-a0d1-4c26-bb73-204fc6c46cf1","Type":"ContainerDied","Data":"dc29d31425d8883d48c60f8e20ddb3b6bced757f5f02566a568be7a692b02aa2"} Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.562840 4853 scope.go:117] "RemoveContainer" containerID="d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f" Dec 09 18:12:27 crc kubenswrapper[4853]: E1209 18:12:27.568407 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.597741 4853 scope.go:117] "RemoveContainer" containerID="bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.604434 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-42bmk"] Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.618504 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-42bmk"] Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.637956 4853 scope.go:117] "RemoveContainer" containerID="40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.686792 4853 scope.go:117] "RemoveContainer" containerID="d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f" Dec 09 18:12:27 crc kubenswrapper[4853]: E1209 18:12:27.687199 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f\": container with ID starting with d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f not found: ID does not exist" containerID="d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.687229 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f"} err="failed to get container status \"d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f\": rpc error: code = NotFound desc = could not find container \"d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f\": container with ID starting with d8423d19b8e16acc8723de293499339e6d8664ba1d3b8ac4d8e00193c588df2f not found: ID does not exist" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 
18:12:27.687251 4853 scope.go:117] "RemoveContainer" containerID="bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab" Dec 09 18:12:27 crc kubenswrapper[4853]: E1209 18:12:27.687680 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab\": container with ID starting with bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab not found: ID does not exist" containerID="bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.687701 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab"} err="failed to get container status \"bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab\": rpc error: code = NotFound desc = could not find container \"bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab\": container with ID starting with bfb91e6cda296557fb0356834bc26bc9f5b23d5e0aa5f583212656e65b5486ab not found: ID does not exist" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.687714 4853 scope.go:117] "RemoveContainer" containerID="40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd" Dec 09 18:12:27 crc kubenswrapper[4853]: E1209 18:12:27.689023 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd\": container with ID starting with 40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd not found: ID does not exist" containerID="40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd" Dec 09 18:12:27 crc kubenswrapper[4853]: I1209 18:12:27.689045 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd"} err="failed to get container status \"40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd\": rpc error: code = NotFound desc = could not find container \"40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd\": container with ID starting with 40b5180051cd41287ceb2d998da44d9aaa4e959e2126670187d88d3310fbf3cd not found: ID does not exist" Dec 09 18:12:29 crc kubenswrapper[4853]: I1209 18:12:29.583661 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" path="/var/lib/kubelet/pods/9a062411-a0d1-4c26-bb73-204fc6c46cf1/volumes" Dec 09 18:12:34 crc kubenswrapper[4853]: I1209 18:12:34.573106 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:12:34 crc kubenswrapper[4853]: E1209 18:12:34.574684 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:12:36 crc kubenswrapper[4853]: E1209 18:12:36.569266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:12:41 crc kubenswrapper[4853]: E1209 18:12:41.569786 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:12:47 crc kubenswrapper[4853]: E1209 18:12:47.569986 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:12:49 crc kubenswrapper[4853]: I1209 18:12:49.567328 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:12:49 crc kubenswrapper[4853]: E1209 18:12:49.568390 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:12:52 crc kubenswrapper[4853]: E1209 18:12:52.573519 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.066271 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gsjdn"] Dec 09 18:12:55 crc kubenswrapper[4853]: E1209 18:12:55.071007 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="registry-server" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.071032 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="registry-server" Dec 09 18:12:55 crc kubenswrapper[4853]: E1209 18:12:55.071062 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="extract-content" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.071070 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="extract-content" Dec 09 18:12:55 crc kubenswrapper[4853]: E1209 18:12:55.071107 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="extract-utilities" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.071117 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="extract-utilities" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.071640 4853 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9a062411-a0d1-4c26-bb73-204fc6c46cf1" containerName="registry-server" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.073893 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.084939 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gsjdn"] Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.211776 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4jd\" (UniqueName: \"kubernetes.io/projected/b2098ab3-3e17-48bb-af56-8043aefb9ae6-kube-api-access-hn4jd\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.211885 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-catalog-content\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.211946 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-utilities\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.313977 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn4jd\" (UniqueName: \"kubernetes.io/projected/b2098ab3-3e17-48bb-af56-8043aefb9ae6-kube-api-access-hn4jd\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.314090 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-catalog-content\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.314184 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-utilities\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.314780 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-utilities\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.314832 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-catalog-content\") pod \"redhat-operators-gsjdn\" (UID: 
\"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.336585 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn4jd\" (UniqueName: \"kubernetes.io/projected/b2098ab3-3e17-48bb-af56-8043aefb9ae6-kube-api-access-hn4jd\") pod \"redhat-operators-gsjdn\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.403161 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:12:55 crc kubenswrapper[4853]: I1209 18:12:55.973931 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gsjdn"] Dec 09 18:12:56 crc kubenswrapper[4853]: I1209 18:12:56.905716 4853 generic.go:334] "Generic (PLEG): container finished" podID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerID="6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e" exitCode=0 Dec 09 18:12:56 crc kubenswrapper[4853]: I1209 18:12:56.905766 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gsjdn" event={"ID":"b2098ab3-3e17-48bb-af56-8043aefb9ae6","Type":"ContainerDied","Data":"6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e"} Dec 09 18:12:56 crc kubenswrapper[4853]: I1209 18:12:56.905999 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gsjdn" event={"ID":"b2098ab3-3e17-48bb-af56-8043aefb9ae6","Type":"ContainerStarted","Data":"924984157d9495709f547d1ccc8fa9414c710db8cb722d3baeb72db1320a1771"} Dec 09 18:12:58 crc kubenswrapper[4853]: I1209 18:12:58.935225 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gsjdn" event={"ID":"b2098ab3-3e17-48bb-af56-8043aefb9ae6","Type":"ContainerStarted","Data":"7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109"} Dec 09 18:13:02 crc kubenswrapper[4853]: I1209 18:13:02.568519 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:13:02 crc kubenswrapper[4853]: E1209 18:13:02.569335 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:13:02 crc kubenswrapper[4853]: E1209 18:13:02.571531 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:13:03 crc kubenswrapper[4853]: I1209 18:13:03.009764 4853 generic.go:334] "Generic (PLEG): container finished" podID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerID="7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109" exitCode=0 Dec 09 18:13:03 crc kubenswrapper[4853]: I1209 18:13:03.009811 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-gsjdn" event={"ID":"b2098ab3-3e17-48bb-af56-8043aefb9ae6","Type":"ContainerDied","Data":"7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109"} Dec 09 18:13:04 crc kubenswrapper[4853]: I1209 18:13:04.025272 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gsjdn" event={"ID":"b2098ab3-3e17-48bb-af56-8043aefb9ae6","Type":"ContainerStarted","Data":"c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d"} Dec 09 18:13:04 crc kubenswrapper[4853]: I1209 18:13:04.050632 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gsjdn" podStartSLOduration=2.2463489 podStartE2EDuration="9.050609402s" podCreationTimestamp="2025-12-09 18:12:55 +0000 UTC" firstStartedPulling="2025-12-09 18:12:56.911407329 +0000 UTC m=+4603.846146511" lastFinishedPulling="2025-12-09 18:13:03.715667831 +0000 UTC m=+4610.650407013" observedRunningTime="2025-12-09 18:13:04.048817682 +0000 UTC m=+4610.983556864" watchObservedRunningTime="2025-12-09 18:13:04.050609402 +0000 UTC m=+4610.985348594" Dec 09 18:13:05 crc kubenswrapper[4853]: I1209 18:13:05.404666 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:13:05 crc kubenswrapper[4853]: I1209 18:13:05.404722 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:13:06 crc kubenswrapper[4853]: I1209 18:13:06.766384 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gsjdn" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="registry-server" probeResult="failure" output=< Dec 09 18:13:06 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 18:13:06 crc kubenswrapper[4853]: > Dec 09 18:13:07 crc kubenswrapper[4853]: E1209 18:13:07.575304 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:13:13 crc kubenswrapper[4853]: I1209 18:13:13.579626 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:13:13 crc kubenswrapper[4853]: E1209 18:13:13.580932 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:13:15 crc kubenswrapper[4853]: I1209 18:13:15.481435 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:13:15 crc kubenswrapper[4853]: I1209 18:13:15.535928 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:13:15 crc kubenswrapper[4853]: I1209 18:13:15.721518 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-gsjdn"] Dec 09 18:13:16 crc kubenswrapper[4853]: E1209 18:13:16.569262 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.164967 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gsjdn" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="registry-server" containerID="cri-o://c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d" gracePeriod=2 Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.687523 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.738005 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-utilities\") pod \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.738358 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-catalog-content\") pod \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.738495 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn4jd\" (UniqueName: \"kubernetes.io/projected/b2098ab3-3e17-48bb-af56-8043aefb9ae6-kube-api-access-hn4jd\") pod \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\" (UID: \"b2098ab3-3e17-48bb-af56-8043aefb9ae6\") " Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.739222 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-utilities" (OuterVolumeSpecName: "utilities") pod "b2098ab3-3e17-48bb-af56-8043aefb9ae6" (UID: "b2098ab3-3e17-48bb-af56-8043aefb9ae6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.747670 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2098ab3-3e17-48bb-af56-8043aefb9ae6-kube-api-access-hn4jd" (OuterVolumeSpecName: "kube-api-access-hn4jd") pod "b2098ab3-3e17-48bb-af56-8043aefb9ae6" (UID: "b2098ab3-3e17-48bb-af56-8043aefb9ae6"). InnerVolumeSpecName "kube-api-access-hn4jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.843146 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn4jd\" (UniqueName: \"kubernetes.io/projected/b2098ab3-3e17-48bb-af56-8043aefb9ae6-kube-api-access-hn4jd\") on node \"crc\" DevicePath \"\"" Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.843176 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.847760 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2098ab3-3e17-48bb-af56-8043aefb9ae6" (UID: "b2098ab3-3e17-48bb-af56-8043aefb9ae6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:13:17 crc kubenswrapper[4853]: I1209 18:13:17.945826 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2098ab3-3e17-48bb-af56-8043aefb9ae6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.177510 4853 generic.go:334] "Generic (PLEG): container finished" podID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerID="c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d" exitCode=0 Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.177579 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gsjdn" event={"ID":"b2098ab3-3e17-48bb-af56-8043aefb9ae6","Type":"ContainerDied","Data":"c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d"} Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.177652 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gsjdn" event={"ID":"b2098ab3-3e17-48bb-af56-8043aefb9ae6","Type":"ContainerDied","Data":"924984157d9495709f547d1ccc8fa9414c710db8cb722d3baeb72db1320a1771"} Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.177673 4853 scope.go:117] "RemoveContainer" containerID="c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.177579 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gsjdn" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.202799 4853 scope.go:117] "RemoveContainer" containerID="7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.232109 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gsjdn"] Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.239863 4853 scope.go:117] "RemoveContainer" containerID="6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.241367 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gsjdn"] Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.289845 4853 scope.go:117] "RemoveContainer" containerID="c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d" Dec 09 18:13:18 crc kubenswrapper[4853]: E1209 18:13:18.290305 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d\": container with ID starting with c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d not found: ID does not exist" containerID="c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.290348 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d"} err="failed to get container status \"c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d\": rpc error: code = NotFound desc = could not find container \"c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d\": container with ID starting with c33da0e5aee6e80e6371c4002cc92e3ff5a0226aa9eb41bdbe5755c9c4f29e0d not found: ID does not exist" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.290374 4853 scope.go:117] "RemoveContainer" containerID="7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109" Dec 09 18:13:18 crc kubenswrapper[4853]: E1209 18:13:18.290813 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109\": container with ID starting with 7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109 not found: ID does not exist" containerID="7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.290844 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109"} err="failed to get container status \"7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109\": rpc error: code = NotFound desc = could not find container \"7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109\": container with ID starting with 7840dce620cd9a0d8e0474a611ed02061e43a37718cc02b4770c03fb1534f109 not found: ID does not exist" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.290864 4853 scope.go:117] "RemoveContainer" containerID="6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e" Dec 09 18:13:18 crc kubenswrapper[4853]: E1209 18:13:18.291104 4853 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e\": container with ID starting with 6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e not found: ID does not exist" containerID="6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e" Dec 09 18:13:18 crc kubenswrapper[4853]: I1209 18:13:18.291125 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e"} err="failed to get container status \"6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e\": rpc error: code = NotFound desc = could not find container \"6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e\": container with ID starting with 6c097cfdb26449df934c7c7880a4facf962f1320ab173382466bdf3a6935a74e not found: ID does not exist" Dec 09 18:13:18 crc kubenswrapper[4853]: E1209 18:13:18.568980 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:13:19 crc kubenswrapper[4853]: I1209 18:13:19.581503 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" path="/var/lib/kubelet/pods/b2098ab3-3e17-48bb-af56-8043aefb9ae6/volumes" Dec 09 18:13:26 crc kubenswrapper[4853]: I1209 18:13:26.567751 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:13:26 crc kubenswrapper[4853]: E1209 18:13:26.568781 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:13:29 crc kubenswrapper[4853]: E1209 18:13:29.570200 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:13:33 crc kubenswrapper[4853]: E1209 18:13:33.578480 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:13:39 crc kubenswrapper[4853]: I1209 18:13:39.568645 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:13:39 crc kubenswrapper[4853]: E1209 18:13:39.569544 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:13:41 crc kubenswrapper[4853]: E1209 18:13:41.570777 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:13:45 crc kubenswrapper[4853]: E1209 18:13:45.569550 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:13:51 crc kubenswrapper[4853]: I1209 18:13:51.567683 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:13:51 crc kubenswrapper[4853]: E1209 18:13:51.568357 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:13:52 crc kubenswrapper[4853]: E1209 18:13:52.571270 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:13:56 crc kubenswrapper[4853]: E1209 18:13:56.570284 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:14:02 crc kubenswrapper[4853]: I1209 18:14:02.567238 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:14:02 crc kubenswrapper[4853]: E1209 18:14:02.568179 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:14:07 crc kubenswrapper[4853]: E1209 18:14:07.570347 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" 
podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:14:08 crc kubenswrapper[4853]: E1209 18:14:08.575133 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:14:14 crc kubenswrapper[4853]: I1209 18:14:14.890314 4853 generic.go:334] "Generic (PLEG): container finished" podID="4f764ae7-2150-4081-9763-a0ef9ce1640f" containerID="e18cd90c1066aedc9baa85b564bc41b094f1f1888201cec819b82f8ae046b5a4" exitCode=2 Dec 09 18:14:14 crc kubenswrapper[4853]: I1209 18:14:14.890356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" event={"ID":"4f764ae7-2150-4081-9763-a0ef9ce1640f","Type":"ContainerDied","Data":"e18cd90c1066aedc9baa85b564bc41b094f1f1888201cec819b82f8ae046b5a4"} Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.398805 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.515982 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-inventory\") pod \"4f764ae7-2150-4081-9763-a0ef9ce1640f\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.516159 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-ssh-key\") pod \"4f764ae7-2150-4081-9763-a0ef9ce1640f\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.516308 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lddj9\" (UniqueName: \"kubernetes.io/projected/4f764ae7-2150-4081-9763-a0ef9ce1640f-kube-api-access-lddj9\") pod \"4f764ae7-2150-4081-9763-a0ef9ce1640f\" (UID: \"4f764ae7-2150-4081-9763-a0ef9ce1640f\") " Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.536717 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f764ae7-2150-4081-9763-a0ef9ce1640f-kube-api-access-lddj9" (OuterVolumeSpecName: "kube-api-access-lddj9") pod "4f764ae7-2150-4081-9763-a0ef9ce1640f" (UID: "4f764ae7-2150-4081-9763-a0ef9ce1640f"). InnerVolumeSpecName "kube-api-access-lddj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.548202 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f764ae7-2150-4081-9763-a0ef9ce1640f" (UID: "4f764ae7-2150-4081-9763-a0ef9ce1640f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.556662 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-inventory" (OuterVolumeSpecName: "inventory") pod "4f764ae7-2150-4081-9763-a0ef9ce1640f" (UID: "4f764ae7-2150-4081-9763-a0ef9ce1640f"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.630897 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lddj9\" (UniqueName: \"kubernetes.io/projected/4f764ae7-2150-4081-9763-a0ef9ce1640f-kube-api-access-lddj9\") on node \"crc\" DevicePath \"\"" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.630941 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.630957 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f764ae7-2150-4081-9763-a0ef9ce1640f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.917737 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" event={"ID":"4f764ae7-2150-4081-9763-a0ef9ce1640f","Type":"ContainerDied","Data":"95dc41908d993d2377bbb15d5bf8d679b53fba2ade507158ffee06e2c60264f5"} Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.917772 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95dc41908d993d2377bbb15d5bf8d679b53fba2ade507158ffee06e2c60264f5" Dec 09 18:14:16 crc kubenswrapper[4853]: I1209 18:14:16.917823 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d9t66" Dec 09 18:14:17 crc kubenswrapper[4853]: I1209 18:14:17.569371 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:14:17 crc kubenswrapper[4853]: E1209 18:14:17.570173 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:14:19 crc kubenswrapper[4853]: E1209 18:14:19.569877 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:14:19 crc kubenswrapper[4853]: E1209 18:14:19.569944 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:14:30 crc kubenswrapper[4853]: I1209 18:14:30.567389 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:14:30 crc kubenswrapper[4853]: E1209 18:14:30.568317 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:14:32 crc kubenswrapper[4853]: E1209 18:14:32.569089 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:14:32 crc kubenswrapper[4853]: E1209 18:14:32.569509 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:14:42 crc kubenswrapper[4853]: I1209 18:14:42.567829 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:14:42 crc kubenswrapper[4853]: E1209 18:14:42.569055 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:14:45 crc kubenswrapper[4853]: E1209 18:14:45.570875 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:14:47 crc kubenswrapper[4853]: E1209 18:14:47.573955 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:14:57 crc kubenswrapper[4853]: I1209 18:14:57.567300 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:14:57 crc kubenswrapper[4853]: E1209 18:14:57.569224 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:14:58 crc kubenswrapper[4853]: E1209 18:14:58.569750 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:14:58 crc kubenswrapper[4853]: E1209 18:14:58.569773 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.194657 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws"] Dec 09 18:15:00 crc kubenswrapper[4853]: E1209 18:15:00.195709 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f764ae7-2150-4081-9763-a0ef9ce1640f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.195734 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f764ae7-2150-4081-9763-a0ef9ce1640f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:15:00 crc kubenswrapper[4853]: E1209 18:15:00.195768 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="extract-content" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.195781 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="extract-content" Dec 09 18:15:00 crc kubenswrapper[4853]: E1209 18:15:00.195813 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="extract-utilities" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.195825 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="extract-utilities" Dec 09 18:15:00 crc kubenswrapper[4853]: E1209 18:15:00.195860 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="registry-server" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.195872 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="registry-server" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.196288 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f764ae7-2150-4081-9763-a0ef9ce1640f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.196327 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2098ab3-3e17-48bb-af56-8043aefb9ae6" containerName="registry-server" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.197681 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.201298 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.201905 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.211028 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws"] Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.277824 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac57e20f-6f4d-4ed1-add9-151f0af8f000-secret-volume\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.278096 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4v2c\" (UniqueName: \"kubernetes.io/projected/ac57e20f-6f4d-4ed1-add9-151f0af8f000-kube-api-access-x4v2c\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.278414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac57e20f-6f4d-4ed1-add9-151f0af8f000-config-volume\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.381288 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac57e20f-6f4d-4ed1-add9-151f0af8f000-secret-volume\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.381324 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4v2c\" (UniqueName: \"kubernetes.io/projected/ac57e20f-6f4d-4ed1-add9-151f0af8f000-kube-api-access-x4v2c\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.381707 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac57e20f-6f4d-4ed1-add9-151f0af8f000-config-volume\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.382413 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac57e20f-6f4d-4ed1-add9-151f0af8f000-config-volume\") pod 
\"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.390436 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac57e20f-6f4d-4ed1-add9-151f0af8f000-secret-volume\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.401394 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4v2c\" (UniqueName: \"kubernetes.io/projected/ac57e20f-6f4d-4ed1-add9-151f0af8f000-kube-api-access-x4v2c\") pod \"collect-profiles-29421735-j8rws\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:00 crc kubenswrapper[4853]: I1209 18:15:00.536183 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:01 crc kubenswrapper[4853]: I1209 18:15:01.051225 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws"] Dec 09 18:15:01 crc kubenswrapper[4853]: I1209 18:15:01.628515 4853 generic.go:334] "Generic (PLEG): container finished" podID="ac57e20f-6f4d-4ed1-add9-151f0af8f000" containerID="d15d884cc350ba27e49eae94b9e03abd79c7dbeed5c8fc5b8ef3e920c79b4c0b" exitCode=0 Dec 09 18:15:01 crc kubenswrapper[4853]: I1209 18:15:01.628614 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" event={"ID":"ac57e20f-6f4d-4ed1-add9-151f0af8f000","Type":"ContainerDied","Data":"d15d884cc350ba27e49eae94b9e03abd79c7dbeed5c8fc5b8ef3e920c79b4c0b"} Dec 09 18:15:01 crc kubenswrapper[4853]: I1209 18:15:01.628869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" event={"ID":"ac57e20f-6f4d-4ed1-add9-151f0af8f000","Type":"ContainerStarted","Data":"80912f6e7ce0a809f08dd39c552ff3afe70751afae20c35aeb81653dd77e2e78"} Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.051579 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.252883 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac57e20f-6f4d-4ed1-add9-151f0af8f000-config-volume\") pod \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.253079 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4v2c\" (UniqueName: \"kubernetes.io/projected/ac57e20f-6f4d-4ed1-add9-151f0af8f000-kube-api-access-x4v2c\") pod \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.253708 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac57e20f-6f4d-4ed1-add9-151f0af8f000-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac57e20f-6f4d-4ed1-add9-151f0af8f000" (UID: "ac57e20f-6f4d-4ed1-add9-151f0af8f000"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.254191 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac57e20f-6f4d-4ed1-add9-151f0af8f000-secret-volume\") pod \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\" (UID: \"ac57e20f-6f4d-4ed1-add9-151f0af8f000\") " Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.254589 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac57e20f-6f4d-4ed1-add9-151f0af8f000-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.273793 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac57e20f-6f4d-4ed1-add9-151f0af8f000-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac57e20f-6f4d-4ed1-add9-151f0af8f000" (UID: "ac57e20f-6f4d-4ed1-add9-151f0af8f000"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.273979 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac57e20f-6f4d-4ed1-add9-151f0af8f000-kube-api-access-x4v2c" (OuterVolumeSpecName: "kube-api-access-x4v2c") pod "ac57e20f-6f4d-4ed1-add9-151f0af8f000" (UID: "ac57e20f-6f4d-4ed1-add9-151f0af8f000"). InnerVolumeSpecName "kube-api-access-x4v2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.357212 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac57e20f-6f4d-4ed1-add9-151f0af8f000-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.357241 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4v2c\" (UniqueName: \"kubernetes.io/projected/ac57e20f-6f4d-4ed1-add9-151f0af8f000-kube-api-access-x4v2c\") on node \"crc\" DevicePath \"\"" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.647296 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" event={"ID":"ac57e20f-6f4d-4ed1-add9-151f0af8f000","Type":"ContainerDied","Data":"80912f6e7ce0a809f08dd39c552ff3afe70751afae20c35aeb81653dd77e2e78"} Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.647729 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80912f6e7ce0a809f08dd39c552ff3afe70751afae20c35aeb81653dd77e2e78" Dec 09 18:15:03 crc kubenswrapper[4853]: I1209 18:15:03.647366 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421735-j8rws" Dec 09 18:15:04 crc kubenswrapper[4853]: I1209 18:15:04.157873 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682"] Dec 09 18:15:04 crc kubenswrapper[4853]: I1209 18:15:04.172583 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421690-xb682"] Dec 09 18:15:05 crc kubenswrapper[4853]: I1209 18:15:05.581138 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e" path="/var/lib/kubelet/pods/c049bac8-63ef-4c10-a7b8-a2c79dbb2f4e/volumes" Dec 09 18:15:10 crc kubenswrapper[4853]: I1209 18:15:10.567728 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:15:11 crc kubenswrapper[4853]: E1209 18:15:11.569902 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:15:11 crc kubenswrapper[4853]: I1209 18:15:11.734719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"660815e3b2f7a8b1e4228b3fd96b3a3dd8fc697f2d8728d1271e220c8f0b6aac"} Dec 09 18:15:13 crc kubenswrapper[4853]: E1209 18:15:13.578277 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:15:24 crc kubenswrapper[4853]: E1209 18:15:24.570266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:15:24 crc kubenswrapper[4853]: E1209 18:15:24.570278 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:15:31 crc kubenswrapper[4853]: I1209 18:15:31.179162 4853 scope.go:117] "RemoveContainer" containerID="7d348ac22ade259af3a232af1ad391e3dba535536ccfe024bd03c9b5ee7b2195" Dec 09 18:15:36 crc kubenswrapper[4853]: E1209 18:15:36.572147 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:15:37 crc kubenswrapper[4853]: E1209 18:15:37.569399 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:15:47 crc kubenswrapper[4853]: E1209 18:15:47.573856 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:15:52 crc kubenswrapper[4853]: E1209 18:15:52.569905 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:15:58 crc kubenswrapper[4853]: E1209 18:15:58.579289 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:16:04 crc kubenswrapper[4853]: E1209 18:16:04.570268 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:16:11 crc kubenswrapper[4853]: E1209 18:16:11.571176 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" 
podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:16:17 crc kubenswrapper[4853]: E1209 18:16:17.570357 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:16:26 crc kubenswrapper[4853]: E1209 18:16:26.569694 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:16:29 crc kubenswrapper[4853]: E1209 18:16:29.571053 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:16:31 crc kubenswrapper[4853]: I1209 18:16:31.247218 4853 scope.go:117] "RemoveContainer" containerID="f471357728b609ec2b3f80ca69600d86b1a8110db96afc11bcb7070bcbc96125" Dec 09 18:16:31 crc kubenswrapper[4853]: I1209 18:16:31.276037 4853 scope.go:117] "RemoveContainer" containerID="1bf854849132462016030fae943bd7e50106895220ab8bf0bfc789a413472345" Dec 09 18:16:31 crc kubenswrapper[4853]: I1209 18:16:31.366024 4853 scope.go:117] "RemoveContainer" containerID="a5a897f3b6835442ce49e980330f47edf7a0e9a9a986dba1e848dbab0a9d21d2" Dec 09 18:16:37 crc kubenswrapper[4853]: E1209 18:16:37.574112 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:16:42 crc kubenswrapper[4853]: E1209 18:16:42.571019 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:16:49 crc kubenswrapper[4853]: I1209 18:16:49.572664 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:16:49 crc kubenswrapper[4853]: E1209 18:16:49.664872 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:16:49 crc kubenswrapper[4853]: E1209 18:16:49.664938 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:16:49 crc kubenswrapper[4853]: E1209 18:16:49.665076 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 18:16:49 crc kubenswrapper[4853]: E1209 18:16:49.666449 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:16:53 crc kubenswrapper[4853]: E1209 18:16:53.677714 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:16:53 crc kubenswrapper[4853]: E1209 18:16:53.678515 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:16:53 crc kubenswrapper[4853]: E1209 18:16:53.678760 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:16:53 crc kubenswrapper[4853]: E1209 18:16:53.679948 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:17:04 crc kubenswrapper[4853]: E1209 18:17:04.569095 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:17:08 crc kubenswrapper[4853]: E1209 18:17:08.572263 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:17:19 crc kubenswrapper[4853]: E1209 18:17:19.571622 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:17:21 crc kubenswrapper[4853]: E1209 18:17:21.570651 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:17:28 crc kubenswrapper[4853]: I1209 18:17:28.593422 4853 patch_prober.go:28] 
interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:17:28 crc kubenswrapper[4853]: I1209 18:17:28.594197 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:17:34 crc kubenswrapper[4853]: E1209 18:17:34.571398 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:17:34 crc kubenswrapper[4853]: E1209 18:17:34.571655 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:17:47 crc kubenswrapper[4853]: E1209 18:17:47.569547 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:17:49 crc kubenswrapper[4853]: E1209 18:17:49.570111 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:17:58 crc kubenswrapper[4853]: I1209 18:17:58.592973 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:17:58 crc kubenswrapper[4853]: I1209 18:17:58.593578 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:18:00 crc kubenswrapper[4853]: E1209 18:18:00.570481 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:18:03 crc kubenswrapper[4853]: E1209 18:18:03.576342 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:18:14 crc kubenswrapper[4853]: E1209 18:18:14.570272 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:18:15 crc kubenswrapper[4853]: E1209 18:18:15.569438 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:18:25 crc kubenswrapper[4853]: E1209 18:18:25.569730 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:18:28 crc kubenswrapper[4853]: I1209 18:18:28.593167 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:18:28 crc kubenswrapper[4853]: I1209 18:18:28.594270 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:18:28 crc kubenswrapper[4853]: I1209 18:18:28.594332 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 18:18:28 crc kubenswrapper[4853]: I1209 18:18:28.595367 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"660815e3b2f7a8b1e4228b3fd96b3a3dd8fc697f2d8728d1271e220c8f0b6aac"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 18:18:28 crc kubenswrapper[4853]: I1209 18:18:28.595455 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://660815e3b2f7a8b1e4228b3fd96b3a3dd8fc697f2d8728d1271e220c8f0b6aac" gracePeriod=600 Dec 09 18:18:29 crc kubenswrapper[4853]: I1209 18:18:29.209278 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="660815e3b2f7a8b1e4228b3fd96b3a3dd8fc697f2d8728d1271e220c8f0b6aac" exitCode=0 Dec 09 18:18:29 crc 
kubenswrapper[4853]: I1209 18:18:29.209324 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"660815e3b2f7a8b1e4228b3fd96b3a3dd8fc697f2d8728d1271e220c8f0b6aac"} Dec 09 18:18:29 crc kubenswrapper[4853]: I1209 18:18:29.209729 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404"} Dec 09 18:18:29 crc kubenswrapper[4853]: I1209 18:18:29.209758 4853 scope.go:117] "RemoveContainer" containerID="7cb1bd200e4e6aad7a6c58831b8c2ecfcf7ce28f34178cb04077961b44601ef6" Dec 09 18:18:30 crc kubenswrapper[4853]: E1209 18:18:30.569167 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:18:36 crc kubenswrapper[4853]: E1209 18:18:36.569062 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:18:41 crc kubenswrapper[4853]: E1209 18:18:41.571374 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:18:49 crc kubenswrapper[4853]: E1209 18:18:49.569911 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:18:55 crc kubenswrapper[4853]: E1209 18:18:55.569708 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:19:01 crc kubenswrapper[4853]: E1209 18:19:01.572335 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:19:07 crc kubenswrapper[4853]: E1209 18:19:07.570695 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:19:12 crc kubenswrapper[4853]: E1209 18:19:12.569933 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:19:19 crc kubenswrapper[4853]: E1209 18:19:19.569184 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:19:24 crc kubenswrapper[4853]: E1209 18:19:24.569793 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.036979 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8"] Dec 09 18:19:33 crc kubenswrapper[4853]: E1209 18:19:33.038108 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac57e20f-6f4d-4ed1-add9-151f0af8f000" containerName="collect-profiles" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.038127 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac57e20f-6f4d-4ed1-add9-151f0af8f000" containerName="collect-profiles" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.038445 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac57e20f-6f4d-4ed1-add9-151f0af8f000" containerName="collect-profiles" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.039590 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.041573 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.041884 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.042192 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l9kqf" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.042231 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.055830 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8"] Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.105215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.105399 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7fn\" (UniqueName: \"kubernetes.io/projected/dbac8a22-f72a-4467-ae1f-1d93430b4049-kube-api-access-8x7fn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.105578 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.208822 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.208900 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7fn\" (UniqueName: \"kubernetes.io/projected/dbac8a22-f72a-4467-ae1f-1d93430b4049-kube-api-access-8x7fn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.208962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.216780 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.217350 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.228067 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7fn\" (UniqueName: \"kubernetes.io/projected/dbac8a22-f72a-4467-ae1f-1d93430b4049-kube-api-access-8x7fn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: I1209 18:19:33.385952 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:19:33 crc kubenswrapper[4853]: E1209 18:19:33.579845 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:19:34 crc kubenswrapper[4853]: I1209 18:19:34.016156 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8"] Dec 09 18:19:34 crc kubenswrapper[4853]: I1209 18:19:34.997795 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" event={"ID":"dbac8a22-f72a-4467-ae1f-1d93430b4049","Type":"ContainerStarted","Data":"8acf619d8a9d0a06f207971590c0ed3fddeeb7f52b0f1f86fcba3604d7b26a91"} Dec 09 18:19:36 crc kubenswrapper[4853]: I1209 18:19:36.009025 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" event={"ID":"dbac8a22-f72a-4467-ae1f-1d93430b4049","Type":"ContainerStarted","Data":"9bce7ee87a10f5ace6884ee1564735ee157803b0893231da4e127ebad8efbc66"} Dec 09 18:19:36 crc kubenswrapper[4853]: I1209 18:19:36.033544 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" podStartSLOduration=2.14907008 podStartE2EDuration="3.033527123s" podCreationTimestamp="2025-12-09 18:19:33 +0000 UTC" firstStartedPulling="2025-12-09 18:19:34.014584356 +0000 UTC m=+5000.949323558" lastFinishedPulling="2025-12-09 18:19:34.899041409 +0000 UTC m=+5001.833780601" observedRunningTime="2025-12-09 18:19:36.026838281 +0000 UTC m=+5002.961577463" 
watchObservedRunningTime="2025-12-09 18:19:36.033527123 +0000 UTC m=+5002.968266305" Dec 09 18:19:37 crc kubenswrapper[4853]: E1209 18:19:37.570624 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:19:48 crc kubenswrapper[4853]: E1209 18:19:48.569023 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:19:52 crc kubenswrapper[4853]: E1209 18:19:52.569799 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:20:02 crc kubenswrapper[4853]: E1209 18:20:02.572306 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:20:03 crc kubenswrapper[4853]: E1209 18:20:03.583061 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:20:15 crc kubenswrapper[4853]: E1209 18:20:15.569143 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:20:16 crc kubenswrapper[4853]: E1209 18:20:16.570931 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:20:26 crc kubenswrapper[4853]: E1209 18:20:26.570563 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:20:28 crc kubenswrapper[4853]: I1209 18:20:28.592987 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:20:28 crc kubenswrapper[4853]: I1209 18:20:28.593543 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:20:31 crc kubenswrapper[4853]: E1209 18:20:31.569995 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.197792 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mx2kk"] Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.204261 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.234422 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx2kk"] Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.246177 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-catalog-content\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.246540 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-utilities\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.246675 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t68p4\" (UniqueName: \"kubernetes.io/projected/059c34e2-f16c-4e5e-9d4f-1935074ac186-kube-api-access-t68p4\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.347911 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-utilities\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.347979 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t68p4\" (UniqueName: \"kubernetes.io/projected/059c34e2-f16c-4e5e-9d4f-1935074ac186-kube-api-access-t68p4\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 
18:20:35.348136 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-catalog-content\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.348824 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-catalog-content\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.348964 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-utilities\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.374394 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t68p4\" (UniqueName: \"kubernetes.io/projected/059c34e2-f16c-4e5e-9d4f-1935074ac186-kube-api-access-t68p4\") pod \"community-operators-mx2kk\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:35 crc kubenswrapper[4853]: I1209 18:20:35.549348 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:36 crc kubenswrapper[4853]: I1209 18:20:36.153371 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mx2kk"] Dec 09 18:20:36 crc kubenswrapper[4853]: W1209 18:20:36.156740 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059c34e2_f16c_4e5e_9d4f_1935074ac186.slice/crio-7026cfae8ca023df4384acb2e86039de6145f318efa7c363ca22c86fd5f8d094 WatchSource:0}: Error finding container 7026cfae8ca023df4384acb2e86039de6145f318efa7c363ca22c86fd5f8d094: Status 404 returned error can't find the container with id 7026cfae8ca023df4384acb2e86039de6145f318efa7c363ca22c86fd5f8d094 Dec 09 18:20:36 crc kubenswrapper[4853]: I1209 18:20:36.749994 4853 generic.go:334] "Generic (PLEG): container finished" podID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerID="5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d" exitCode=0 Dec 09 18:20:36 crc kubenswrapper[4853]: I1209 18:20:36.750063 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx2kk" event={"ID":"059c34e2-f16c-4e5e-9d4f-1935074ac186","Type":"ContainerDied","Data":"5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d"} Dec 09 18:20:36 crc kubenswrapper[4853]: I1209 18:20:36.750285 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx2kk" event={"ID":"059c34e2-f16c-4e5e-9d4f-1935074ac186","Type":"ContainerStarted","Data":"7026cfae8ca023df4384acb2e86039de6145f318efa7c363ca22c86fd5f8d094"} Dec 09 18:20:37 crc kubenswrapper[4853]: E1209 18:20:37.569397 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:20:37 crc kubenswrapper[4853]: I1209 18:20:37.764767 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx2kk" event={"ID":"059c34e2-f16c-4e5e-9d4f-1935074ac186","Type":"ContainerStarted","Data":"70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01"} Dec 09 18:20:38 crc kubenswrapper[4853]: I1209 18:20:38.777296 4853 generic.go:334] "Generic (PLEG): container finished" podID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerID="70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01" exitCode=0 Dec 09 18:20:38 crc kubenswrapper[4853]: I1209 18:20:38.777465 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx2kk" event={"ID":"059c34e2-f16c-4e5e-9d4f-1935074ac186","Type":"ContainerDied","Data":"70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01"} Dec 09 18:20:40 crc kubenswrapper[4853]: I1209 18:20:40.800803 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx2kk" event={"ID":"059c34e2-f16c-4e5e-9d4f-1935074ac186","Type":"ContainerStarted","Data":"1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42"} Dec 09 18:20:40 crc kubenswrapper[4853]: I1209 18:20:40.825369 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mx2kk" podStartSLOduration=3.060802451 podStartE2EDuration="5.825346908s" podCreationTimestamp="2025-12-09 18:20:35 +0000 UTC" firstStartedPulling="2025-12-09 18:20:36.753619129 +0000 UTC m=+5063.688358311" lastFinishedPulling="2025-12-09 18:20:39.518163586 +0000 UTC m=+5066.452902768" observedRunningTime="2025-12-09 18:20:40.817218766 +0000 UTC m=+5067.751957958" watchObservedRunningTime="2025-12-09 18:20:40.825346908 +0000 UTC m=+5067.760086100" Dec 09 18:20:44 crc kubenswrapper[4853]: E1209 18:20:44.570062 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:20:45 crc kubenswrapper[4853]: I1209 18:20:45.550028 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:45 crc kubenswrapper[4853]: I1209 18:20:45.550099 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:45 crc kubenswrapper[4853]: I1209 18:20:45.602674 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:45 crc kubenswrapper[4853]: I1209 18:20:45.913863 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:45 crc kubenswrapper[4853]: I1209 18:20:45.967050 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx2kk"] Dec 09 18:20:47 crc kubenswrapper[4853]: I1209 18:20:47.884897 4853 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-mx2kk" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="registry-server" containerID="cri-o://1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42" gracePeriod=2 Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.384527 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.498700 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t68p4\" (UniqueName: \"kubernetes.io/projected/059c34e2-f16c-4e5e-9d4f-1935074ac186-kube-api-access-t68p4\") pod \"059c34e2-f16c-4e5e-9d4f-1935074ac186\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.498767 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-utilities\") pod \"059c34e2-f16c-4e5e-9d4f-1935074ac186\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.498808 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-catalog-content\") pod \"059c34e2-f16c-4e5e-9d4f-1935074ac186\" (UID: \"059c34e2-f16c-4e5e-9d4f-1935074ac186\") " Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.500939 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-utilities" (OuterVolumeSpecName: "utilities") pod "059c34e2-f16c-4e5e-9d4f-1935074ac186" (UID: "059c34e2-f16c-4e5e-9d4f-1935074ac186"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.504147 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059c34e2-f16c-4e5e-9d4f-1935074ac186-kube-api-access-t68p4" (OuterVolumeSpecName: "kube-api-access-t68p4") pod "059c34e2-f16c-4e5e-9d4f-1935074ac186" (UID: "059c34e2-f16c-4e5e-9d4f-1935074ac186"). InnerVolumeSpecName "kube-api-access-t68p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.556044 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "059c34e2-f16c-4e5e-9d4f-1935074ac186" (UID: "059c34e2-f16c-4e5e-9d4f-1935074ac186"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.602019 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.602052 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t68p4\" (UniqueName: \"kubernetes.io/projected/059c34e2-f16c-4e5e-9d4f-1935074ac186-kube-api-access-t68p4\") on node \"crc\" DevicePath \"\"" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.602061 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/059c34e2-f16c-4e5e-9d4f-1935074ac186-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.905280 4853 generic.go:334] "Generic (PLEG): container finished" podID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerID="1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42" exitCode=0 Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.905319 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx2kk" event={"ID":"059c34e2-f16c-4e5e-9d4f-1935074ac186","Type":"ContainerDied","Data":"1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42"} Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.905343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mx2kk" event={"ID":"059c34e2-f16c-4e5e-9d4f-1935074ac186","Type":"ContainerDied","Data":"7026cfae8ca023df4384acb2e86039de6145f318efa7c363ca22c86fd5f8d094"} Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.905354 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mx2kk" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.905358 4853 scope.go:117] "RemoveContainer" containerID="1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.942874 4853 scope.go:117] "RemoveContainer" containerID="70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01" Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.947479 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mx2kk"] Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.967458 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mx2kk"] Dec 09 18:20:48 crc kubenswrapper[4853]: I1209 18:20:48.975202 4853 scope.go:117] "RemoveContainer" containerID="5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d" Dec 09 18:20:49 crc kubenswrapper[4853]: I1209 18:20:49.028872 4853 scope.go:117] "RemoveContainer" containerID="1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42" Dec 09 18:20:49 crc kubenswrapper[4853]: E1209 18:20:49.029298 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42\": container with ID starting with 1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42 not found: ID does not exist" containerID="1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42" Dec 09 18:20:49 crc kubenswrapper[4853]: I1209 18:20:49.029338 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42"} err="failed to get container status \"1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42\": rpc error: code = NotFound desc = could not find container \"1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42\": container with ID starting with 1e76033d5fb0ffa2bc5d16576c9d1965df8c389a032b74386a3516cb9b339a42 not found: ID does not exist" Dec 09 18:20:49 crc kubenswrapper[4853]: I1209 18:20:49.029365 4853 scope.go:117] "RemoveContainer" containerID="70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01" Dec 09 18:20:49 crc kubenswrapper[4853]: E1209 18:20:49.029775 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01\": container with ID starting with 70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01 not found: ID does not exist" containerID="70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01" Dec 09 18:20:49 crc kubenswrapper[4853]: I1209 18:20:49.029798 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01"} err="failed to get container status \"70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01\": rpc error: code = NotFound desc = could not find container \"70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01\": container with ID starting with 70549b6c001e852f2895cb57ef0d031e17a3c9376eb5f6abbde50ab8641e1e01 not found: ID does not exist" Dec 09 18:20:49 crc kubenswrapper[4853]: I1209 18:20:49.029814 4853 scope.go:117] "RemoveContainer" 
containerID="5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d" Dec 09 18:20:49 crc kubenswrapper[4853]: E1209 18:20:49.030157 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d\": container with ID starting with 5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d not found: ID does not exist" containerID="5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d" Dec 09 18:20:49 crc kubenswrapper[4853]: I1209 18:20:49.030203 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d"} err="failed to get container status \"5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d\": rpc error: code = NotFound desc = could not find container \"5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d\": container with ID starting with 5ef9afed734da3780a72ea425fe97374e3b402b9a1e839287aea5183cf162d4d not found: ID does not exist" Dec 09 18:20:49 crc kubenswrapper[4853]: I1209 18:20:49.584188 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" path="/var/lib/kubelet/pods/059c34e2-f16c-4e5e-9d4f-1935074ac186/volumes" Dec 09 18:20:52 crc kubenswrapper[4853]: E1209 18:20:52.571188 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:20:58 crc kubenswrapper[4853]: I1209 18:20:58.592949 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:20:58 crc kubenswrapper[4853]: I1209 18:20:58.594700 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:20:59 crc kubenswrapper[4853]: E1209 18:20:59.569073 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:21:05 crc kubenswrapper[4853]: E1209 18:21:05.569938 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:21:12 crc kubenswrapper[4853]: E1209 18:21:12.569813 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:21:17 crc kubenswrapper[4853]: E1209 18:21:17.573816 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:21:25 crc kubenswrapper[4853]: E1209 18:21:25.571446 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:21:28 crc kubenswrapper[4853]: I1209 18:21:28.594249 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:21:28 crc kubenswrapper[4853]: I1209 18:21:28.594656 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:21:28 crc kubenswrapper[4853]: I1209 18:21:28.594710 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 18:21:28 crc kubenswrapper[4853]: I1209 18:21:28.595678 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 18:21:28 crc kubenswrapper[4853]: I1209 18:21:28.595723 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" gracePeriod=600 Dec 09 18:21:28 crc kubenswrapper[4853]: E1209 18:21:28.732021 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:21:29 crc kubenswrapper[4853]: I1209 18:21:29.347557 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" exitCode=0 Dec 09 18:21:29 crc 
kubenswrapper[4853]: I1209 18:21:29.347687 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404"} Dec 09 18:21:29 crc kubenswrapper[4853]: I1209 18:21:29.347955 4853 scope.go:117] "RemoveContainer" containerID="660815e3b2f7a8b1e4228b3fd96b3a3dd8fc697f2d8728d1271e220c8f0b6aac" Dec 09 18:21:29 crc kubenswrapper[4853]: I1209 18:21:29.348727 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:21:29 crc kubenswrapper[4853]: E1209 18:21:29.349072 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:21:32 crc kubenswrapper[4853]: E1209 18:21:32.569246 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.344811 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-svtkm"] Dec 09 18:21:39 crc kubenswrapper[4853]: E1209 18:21:39.347276 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="extract-utilities" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.347763 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="extract-utilities" Dec 09 18:21:39 crc kubenswrapper[4853]: E1209 18:21:39.347837 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="extract-content" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.347857 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="extract-content" Dec 09 18:21:39 crc kubenswrapper[4853]: E1209 18:21:39.347897 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="registry-server" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.347907 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="registry-server" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.348312 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="059c34e2-f16c-4e5e-9d4f-1935074ac186" containerName="registry-server" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.352552 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.366051 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svtkm"] Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.458096 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-utilities\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.458183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgrwj\" (UniqueName: \"kubernetes.io/projected/4405dedc-0001-49ab-afdc-29cb16bded14-kube-api-access-kgrwj\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.458206 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-catalog-content\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.560524 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgrwj\" (UniqueName: \"kubernetes.io/projected/4405dedc-0001-49ab-afdc-29cb16bded14-kube-api-access-kgrwj\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.560579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-catalog-content\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.560804 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-utilities\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.561288 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-utilities\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.561298 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-catalog-content\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: E1209 18:21:39.572748 4853 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.592968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgrwj\" (UniqueName: \"kubernetes.io/projected/4405dedc-0001-49ab-afdc-29cb16bded14-kube-api-access-kgrwj\") pod \"redhat-marketplace-svtkm\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:39 crc kubenswrapper[4853]: I1209 18:21:39.724462 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:40 crc kubenswrapper[4853]: E1209 18:21:40.156055 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Dec 09 18:21:40 crc kubenswrapper[4853]: I1209 18:21:40.242020 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svtkm"] Dec 09 18:21:40 crc kubenswrapper[4853]: I1209 18:21:40.517855 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svtkm" event={"ID":"4405dedc-0001-49ab-afdc-29cb16bded14","Type":"ContainerStarted","Data":"8e90e1cb9bebdd92cf4cf7728ae702bff4d55e0c43c0152a65902197aecdbeab"} Dec 09 18:21:40 crc kubenswrapper[4853]: I1209 18:21:40.518127 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svtkm" event={"ID":"4405dedc-0001-49ab-afdc-29cb16bded14","Type":"ContainerStarted","Data":"19a51a9115ed64c5436db3f5a76ffdc21f1b9102565a1eeb9d2befe9455cf021"} Dec 09 18:21:41 crc kubenswrapper[4853]: I1209 18:21:41.530860 4853 generic.go:334] "Generic (PLEG): container finished" podID="4405dedc-0001-49ab-afdc-29cb16bded14" containerID="8e90e1cb9bebdd92cf4cf7728ae702bff4d55e0c43c0152a65902197aecdbeab" exitCode=0 Dec 09 18:21:41 crc kubenswrapper[4853]: I1209 18:21:41.531113 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svtkm" event={"ID":"4405dedc-0001-49ab-afdc-29cb16bded14","Type":"ContainerDied","Data":"8e90e1cb9bebdd92cf4cf7728ae702bff4d55e0c43c0152a65902197aecdbeab"} Dec 09 18:21:43 crc kubenswrapper[4853]: I1209 18:21:43.573580 4853 generic.go:334] "Generic (PLEG): container finished" podID="4405dedc-0001-49ab-afdc-29cb16bded14" containerID="3b122c1aceabdae1e30f289db5e95564d0536a4a8dd2fb2e8c1c0c7c1f35aca6" exitCode=0 Dec 09 18:21:43 crc kubenswrapper[4853]: I1209 18:21:43.588107 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svtkm" event={"ID":"4405dedc-0001-49ab-afdc-29cb16bded14","Type":"ContainerDied","Data":"3b122c1aceabdae1e30f289db5e95564d0536a4a8dd2fb2e8c1c0c7c1f35aca6"} Dec 09 18:21:44 crc kubenswrapper[4853]: I1209 18:21:44.586990 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:21:44 crc kubenswrapper[4853]: E1209 18:21:44.587394 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:21:44 crc kubenswrapper[4853]: E1209 18:21:44.592103 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:21:45 crc kubenswrapper[4853]: I1209 18:21:45.618692 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svtkm" event={"ID":"4405dedc-0001-49ab-afdc-29cb16bded14","Type":"ContainerStarted","Data":"8adb52b435005af7bda82e5a67bf8d373d29b2204d5b245c221ae6dcd492617a"} Dec 09 18:21:45 crc kubenswrapper[4853]: I1209 18:21:45.642189 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svtkm" podStartSLOduration=3.130677794 podStartE2EDuration="6.642164707s" podCreationTimestamp="2025-12-09 18:21:39 +0000 UTC" firstStartedPulling="2025-12-09 18:21:40.519880827 +0000 UTC m=+5127.454619999" lastFinishedPulling="2025-12-09 18:21:44.03136773 +0000 UTC m=+5130.966106912" observedRunningTime="2025-12-09 18:21:45.638424285 +0000 UTC m=+5132.573163467" watchObservedRunningTime="2025-12-09 18:21:45.642164707 +0000 UTC m=+5132.576903889" Dec 09 18:21:49 crc kubenswrapper[4853]: I1209 18:21:49.724911 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:49 crc kubenswrapper[4853]: I1209 18:21:49.725614 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:49 crc kubenswrapper[4853]: I1209 18:21:49.797355 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:50 crc kubenswrapper[4853]: I1209 18:21:50.740552 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:50 crc kubenswrapper[4853]: I1209 18:21:50.790454 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svtkm"] Dec 09 18:21:51 crc kubenswrapper[4853]: I1209 18:21:51.570686 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:21:51 crc kubenswrapper[4853]: E1209 18:21:51.708138 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:21:51 crc kubenswrapper[4853]: E1209 18:21:51.708459 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:21:51 crc kubenswrapper[4853]: E1209 18:21:51.708623 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 18:21:51 crc kubenswrapper[4853]: E1209 18:21:51.709830 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:21:52 crc kubenswrapper[4853]: I1209 18:21:52.698951 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-svtkm" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="registry-server" containerID="cri-o://8adb52b435005af7bda82e5a67bf8d373d29b2204d5b245c221ae6dcd492617a" gracePeriod=2 Dec 09 18:21:53 crc kubenswrapper[4853]: I1209 18:21:53.712785 4853 generic.go:334] "Generic (PLEG): container finished" podID="4405dedc-0001-49ab-afdc-29cb16bded14" containerID="8adb52b435005af7bda82e5a67bf8d373d29b2204d5b245c221ae6dcd492617a" exitCode=0 Dec 09 18:21:53 crc kubenswrapper[4853]: I1209 18:21:53.713415 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svtkm" event={"ID":"4405dedc-0001-49ab-afdc-29cb16bded14","Type":"ContainerDied","Data":"8adb52b435005af7bda82e5a67bf8d373d29b2204d5b245c221ae6dcd492617a"} Dec 09 18:21:53 crc kubenswrapper[4853]: I1209 18:21:53.842828 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:53 crc kubenswrapper[4853]: I1209 18:21:53.995843 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgrwj\" (UniqueName: \"kubernetes.io/projected/4405dedc-0001-49ab-afdc-29cb16bded14-kube-api-access-kgrwj\") pod \"4405dedc-0001-49ab-afdc-29cb16bded14\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " Dec 09 18:21:53 crc kubenswrapper[4853]: I1209 18:21:53.996089 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-utilities\") pod \"4405dedc-0001-49ab-afdc-29cb16bded14\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " Dec 09 18:21:53 crc kubenswrapper[4853]: I1209 18:21:53.996124 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-catalog-content\") pod \"4405dedc-0001-49ab-afdc-29cb16bded14\" (UID: \"4405dedc-0001-49ab-afdc-29cb16bded14\") " Dec 09 18:21:53 crc kubenswrapper[4853]: I1209 18:21:53.997403 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-utilities" (OuterVolumeSpecName: "utilities") pod "4405dedc-0001-49ab-afdc-29cb16bded14" (UID: "4405dedc-0001-49ab-afdc-29cb16bded14"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.002825 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4405dedc-0001-49ab-afdc-29cb16bded14-kube-api-access-kgrwj" (OuterVolumeSpecName: "kube-api-access-kgrwj") pod "4405dedc-0001-49ab-afdc-29cb16bded14" (UID: "4405dedc-0001-49ab-afdc-29cb16bded14"). InnerVolumeSpecName "kube-api-access-kgrwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.021845 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4405dedc-0001-49ab-afdc-29cb16bded14" (UID: "4405dedc-0001-49ab-afdc-29cb16bded14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.099520 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgrwj\" (UniqueName: \"kubernetes.io/projected/4405dedc-0001-49ab-afdc-29cb16bded14-kube-api-access-kgrwj\") on node \"crc\" DevicePath \"\"" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.099769 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.099843 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4405dedc-0001-49ab-afdc-29cb16bded14-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.727322 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svtkm" event={"ID":"4405dedc-0001-49ab-afdc-29cb16bded14","Type":"ContainerDied","Data":"19a51a9115ed64c5436db3f5a76ffdc21f1b9102565a1eeb9d2befe9455cf021"} Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.727579 4853 scope.go:117] "RemoveContainer" containerID="8adb52b435005af7bda82e5a67bf8d373d29b2204d5b245c221ae6dcd492617a" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.727539 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svtkm" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.755908 4853 scope.go:117] "RemoveContainer" containerID="3b122c1aceabdae1e30f289db5e95564d0536a4a8dd2fb2e8c1c0c7c1f35aca6" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.786461 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svtkm"] Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.791849 4853 scope.go:117] "RemoveContainer" containerID="8e90e1cb9bebdd92cf4cf7728ae702bff4d55e0c43c0152a65902197aecdbeab" Dec 09 18:21:54 crc kubenswrapper[4853]: I1209 18:21:54.797298 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-svtkm"] Dec 09 18:21:55 crc kubenswrapper[4853]: I1209 18:21:55.583140 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" path="/var/lib/kubelet/pods/4405dedc-0001-49ab-afdc-29cb16bded14/volumes" Dec 09 18:21:57 crc kubenswrapper[4853]: E1209 18:21:57.681777 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:21:57 crc kubenswrapper[4853]: E1209 18:21:57.682332 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:21:57 crc kubenswrapper[4853]: E1209 18:21:57.682468 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:21:57 crc kubenswrapper[4853]: E1209 18:21:57.683717 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:21:58 crc kubenswrapper[4853]: I1209 18:21:58.567287 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:21:58 crc kubenswrapper[4853]: E1209 18:21:58.567712 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:22:04 crc kubenswrapper[4853]: E1209 18:22:04.571000 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:22:12 crc kubenswrapper[4853]: I1209 18:22:12.567858 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:22:12 crc kubenswrapper[4853]: E1209 18:22:12.568690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:22:12 crc kubenswrapper[4853]: E1209 18:22:12.572259 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.072246 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-md7sp"] Dec 09 18:22:16 crc kubenswrapper[4853]: E1209 18:22:16.074347 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="extract-utilities" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.074461 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="extract-utilities" Dec 09 18:22:16 crc kubenswrapper[4853]: E1209 18:22:16.074571 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="registry-server" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.074715 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="registry-server" Dec 09 18:22:16 crc kubenswrapper[4853]: E1209 18:22:16.074844 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="extract-content" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.074949 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="extract-content" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.075565 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4405dedc-0001-49ab-afdc-29cb16bded14" containerName="registry-server" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.077960 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.096941 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-md7sp"] Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.198861 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-utilities\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.198982 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvs9\" (UniqueName: \"kubernetes.io/projected/f7627429-6f1b-4e5c-9c8a-f209300b71be-kube-api-access-rnvs9\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.199081 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-catalog-content\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.301398 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-utilities\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.301638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvs9\" (UniqueName: \"kubernetes.io/projected/f7627429-6f1b-4e5c-9c8a-f209300b71be-kube-api-access-rnvs9\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.301821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-catalog-content\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.302456 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-catalog-content\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.302786 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-utilities\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:16 crc kubenswrapper[4853]: I1209 18:22:16.885430 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvs9\" (UniqueName: \"kubernetes.io/projected/f7627429-6f1b-4e5c-9c8a-f209300b71be-kube-api-access-rnvs9\") pod \"certified-operators-md7sp\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:17 crc kubenswrapper[4853]: I1209 18:22:17.000317 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:17 crc kubenswrapper[4853]: I1209 18:22:17.547224 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-md7sp"] Dec 09 18:22:18 crc kubenswrapper[4853]: I1209 18:22:18.000484 4853 generic.go:334] "Generic (PLEG): container finished" podID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerID="41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce" exitCode=0 Dec 09 18:22:18 crc kubenswrapper[4853]: I1209 18:22:18.000840 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md7sp" event={"ID":"f7627429-6f1b-4e5c-9c8a-f209300b71be","Type":"ContainerDied","Data":"41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce"} Dec 09 18:22:18 crc kubenswrapper[4853]: I1209 18:22:18.000870 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md7sp" event={"ID":"f7627429-6f1b-4e5c-9c8a-f209300b71be","Type":"ContainerStarted","Data":"91b1e84b525de27f55eb454a4704e8e563e87616fb1ef2ad05b608b891350fe7"} Dec 09 18:22:19 crc kubenswrapper[4853]: E1209 18:22:19.574932 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:22:20 crc kubenswrapper[4853]: I1209 18:22:20.029107 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md7sp" event={"ID":"f7627429-6f1b-4e5c-9c8a-f209300b71be","Type":"ContainerStarted","Data":"ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517"} Dec 09 18:22:21 crc kubenswrapper[4853]: I1209 18:22:21.043274 4853 generic.go:334] "Generic (PLEG): container finished" podID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerID="ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517" exitCode=0 Dec 09 18:22:21 crc kubenswrapper[4853]: I1209 18:22:21.043325 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md7sp" event={"ID":"f7627429-6f1b-4e5c-9c8a-f209300b71be","Type":"ContainerDied","Data":"ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517"} Dec 09 18:22:22 crc kubenswrapper[4853]: I1209 18:22:22.055570 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md7sp" 
event={"ID":"f7627429-6f1b-4e5c-9c8a-f209300b71be","Type":"ContainerStarted","Data":"d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289"} Dec 09 18:22:22 crc kubenswrapper[4853]: I1209 18:22:22.092298 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-md7sp" podStartSLOduration=2.520396096 podStartE2EDuration="6.092269509s" podCreationTimestamp="2025-12-09 18:22:16 +0000 UTC" firstStartedPulling="2025-12-09 18:22:18.00332808 +0000 UTC m=+5164.938067272" lastFinishedPulling="2025-12-09 18:22:21.575201503 +0000 UTC m=+5168.509940685" observedRunningTime="2025-12-09 18:22:22.086895633 +0000 UTC m=+5169.021634835" watchObservedRunningTime="2025-12-09 18:22:22.092269509 +0000 UTC m=+5169.027008701" Dec 09 18:22:24 crc kubenswrapper[4853]: I1209 18:22:24.568001 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:22:24 crc kubenswrapper[4853]: E1209 18:22:24.568732 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:22:26 crc kubenswrapper[4853]: E1209 18:22:26.572866 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:22:27 crc kubenswrapper[4853]: I1209 18:22:27.001024 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:27 crc kubenswrapper[4853]: I1209 18:22:27.001428 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:27 crc kubenswrapper[4853]: I1209 18:22:27.071443 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:27 crc kubenswrapper[4853]: I1209 18:22:27.204881 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:27 crc kubenswrapper[4853]: I1209 18:22:27.330875 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-md7sp"] Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.158358 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-md7sp" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="registry-server" containerID="cri-o://d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289" gracePeriod=2 Dec 09 18:22:29 crc kubenswrapper[4853]: E1209 18:22:29.356919 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7627429_6f1b_4e5c_9c8a_f209300b71be.slice/crio-d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289.scope\": RecentStats: unable 
to find data in memory cache]" Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.734262 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.898283 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnvs9\" (UniqueName: \"kubernetes.io/projected/f7627429-6f1b-4e5c-9c8a-f209300b71be-kube-api-access-rnvs9\") pod \"f7627429-6f1b-4e5c-9c8a-f209300b71be\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.898755 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-utilities\") pod \"f7627429-6f1b-4e5c-9c8a-f209300b71be\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.898941 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-catalog-content\") pod \"f7627429-6f1b-4e5c-9c8a-f209300b71be\" (UID: \"f7627429-6f1b-4e5c-9c8a-f209300b71be\") " Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.899970 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-utilities" (OuterVolumeSpecName: "utilities") pod "f7627429-6f1b-4e5c-9c8a-f209300b71be" (UID: "f7627429-6f1b-4e5c-9c8a-f209300b71be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.907907 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7627429-6f1b-4e5c-9c8a-f209300b71be-kube-api-access-rnvs9" (OuterVolumeSpecName: "kube-api-access-rnvs9") pod "f7627429-6f1b-4e5c-9c8a-f209300b71be" (UID: "f7627429-6f1b-4e5c-9c8a-f209300b71be"). InnerVolumeSpecName "kube-api-access-rnvs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:22:29 crc kubenswrapper[4853]: I1209 18:22:29.980813 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7627429-6f1b-4e5c-9c8a-f209300b71be" (UID: "f7627429-6f1b-4e5c-9c8a-f209300b71be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.002338 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.002393 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7627429-6f1b-4e5c-9c8a-f209300b71be-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.002409 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnvs9\" (UniqueName: \"kubernetes.io/projected/f7627429-6f1b-4e5c-9c8a-f209300b71be-kube-api-access-rnvs9\") on node \"crc\" DevicePath \"\"" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.176960 4853 generic.go:334] "Generic (PLEG): container finished" podID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerID="d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289" exitCode=0 Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.177023 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md7sp" event={"ID":"f7627429-6f1b-4e5c-9c8a-f209300b71be","Type":"ContainerDied","Data":"d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289"} Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.177063 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-md7sp" event={"ID":"f7627429-6f1b-4e5c-9c8a-f209300b71be","Type":"ContainerDied","Data":"91b1e84b525de27f55eb454a4704e8e563e87616fb1ef2ad05b608b891350fe7"} Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.177091 4853 scope.go:117] "RemoveContainer" containerID="d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.177290 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-md7sp" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.208057 4853 scope.go:117] "RemoveContainer" containerID="ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.248739 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-md7sp"] Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.260969 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-md7sp"] Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.276222 4853 scope.go:117] "RemoveContainer" containerID="41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.344393 4853 scope.go:117] "RemoveContainer" containerID="d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289" Dec 09 18:22:30 crc kubenswrapper[4853]: E1209 18:22:30.345058 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289\": container with ID starting with d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289 not found: ID does not exist" containerID="d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.345119 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289"} err="failed to get container status \"d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289\": rpc error: code = NotFound desc = could not find container \"d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289\": container with ID starting with d714165e8f8fa5c8cce5c2dc8666f90ecd465fe0f559f085364ebeab9ccbc289 not found: ID does not exist" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.345146 4853 scope.go:117] "RemoveContainer" containerID="ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517" Dec 09 18:22:30 crc kubenswrapper[4853]: E1209 18:22:30.345639 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517\": container with ID starting with ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517 not found: ID does not exist" containerID="ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.345713 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517"} err="failed to get container status \"ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517\": rpc error: code = NotFound desc = could not find container \"ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517\": container with ID starting with ecb3985cc5ed3bad9986bfdf2c41d57cc94f793b89916b14fb35fb2166e91517 not found: ID does not exist" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.345758 4853 scope.go:117] "RemoveContainer" containerID="41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce" Dec 09 18:22:30 crc kubenswrapper[4853]: E1209 18:22:30.346273 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce\": container with ID starting with 41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce not found: ID does not exist" containerID="41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce" Dec 09 18:22:30 crc kubenswrapper[4853]: I1209 18:22:30.346365 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce"} err="failed to get container status \"41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce\": rpc error: code = NotFound desc = could not find container \"41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce\": container with ID starting with 41c615f02ceba270edd7d680bc3f08822bf48919d261b40dd498822ac1954dce not found: ID does not exist" Dec 09 18:22:30 crc kubenswrapper[4853]: E1209 18:22:30.568768 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:22:31 crc kubenswrapper[4853]: I1209 18:22:31.594010 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" path="/var/lib/kubelet/pods/f7627429-6f1b-4e5c-9c8a-f209300b71be/volumes" Dec 09 18:22:38 crc kubenswrapper[4853]: I1209 18:22:38.567781 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:22:38 crc kubenswrapper[4853]: E1209 18:22:38.568952 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:22:40 crc kubenswrapper[4853]: E1209 18:22:40.569932 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:22:44 crc kubenswrapper[4853]: E1209 18:22:44.570313 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:22:51 crc kubenswrapper[4853]: I1209 18:22:51.568563 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:22:51 crc kubenswrapper[4853]: E1209 18:22:51.571449 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:22:55 crc kubenswrapper[4853]: E1209 18:22:55.571401 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:22:58 crc kubenswrapper[4853]: E1209 18:22:58.569292 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:23:06 crc kubenswrapper[4853]: I1209 18:23:06.569441 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:23:06 crc kubenswrapper[4853]: E1209 18:23:06.570930 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:23:08 crc kubenswrapper[4853]: E1209 18:23:08.570581 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:23:10 crc kubenswrapper[4853]: E1209 18:23:10.570386 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:23:17 crc kubenswrapper[4853]: I1209 18:23:17.568176 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:23:17 crc kubenswrapper[4853]: E1209 18:23:17.569476 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:23:22 crc kubenswrapper[4853]: E1209 18:23:22.573019 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:23:23 crc kubenswrapper[4853]: E1209 18:23:23.588864 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:23:32 crc kubenswrapper[4853]: I1209 18:23:32.567708 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:23:32 crc kubenswrapper[4853]: E1209 18:23:32.569667 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:23:34 crc kubenswrapper[4853]: E1209 18:23:34.569980 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:23:36 crc kubenswrapper[4853]: E1209 18:23:36.569769 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:23:43 crc kubenswrapper[4853]: I1209 18:23:43.586818 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:23:43 crc kubenswrapper[4853]: E1209 18:23:43.587760 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:23:47 crc kubenswrapper[4853]: E1209 18:23:47.570824 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:23:48 crc kubenswrapper[4853]: E1209 18:23:48.569662 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:23:57 crc kubenswrapper[4853]: I1209 18:23:57.567659 4853 scope.go:117] "RemoveContainer" 
containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:23:57 crc kubenswrapper[4853]: E1209 18:23:57.568862 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:23:58 crc kubenswrapper[4853]: E1209 18:23:58.571085 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:24:00 crc kubenswrapper[4853]: E1209 18:24:00.570418 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:24:10 crc kubenswrapper[4853]: I1209 18:24:10.567933 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:24:10 crc kubenswrapper[4853]: E1209 18:24:10.568838 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:24:11 crc kubenswrapper[4853]: E1209 18:24:11.570760 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:24:12 crc kubenswrapper[4853]: E1209 18:24:12.570357 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:24:21 crc kubenswrapper[4853]: I1209 18:24:21.567792 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:24:21 crc kubenswrapper[4853]: E1209 18:24:21.568971 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:24:24 crc 
kubenswrapper[4853]: E1209 18:24:24.570230 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:24:25 crc kubenswrapper[4853]: E1209 18:24:25.569753 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:24:35 crc kubenswrapper[4853]: I1209 18:24:35.567956 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:24:35 crc kubenswrapper[4853]: E1209 18:24:35.569053 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:24:36 crc kubenswrapper[4853]: E1209 18:24:36.568770 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:24:39 crc kubenswrapper[4853]: E1209 18:24:39.570440 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:24:48 crc kubenswrapper[4853]: I1209 18:24:48.568405 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:24:48 crc kubenswrapper[4853]: E1209 18:24:48.569792 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:24:50 crc kubenswrapper[4853]: E1209 18:24:50.571650 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:24:53 crc kubenswrapper[4853]: E1209 18:24:53.585691 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:25:01 crc kubenswrapper[4853]: I1209 18:25:01.567500 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:25:01 crc kubenswrapper[4853]: E1209 18:25:01.568701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:25:03 crc kubenswrapper[4853]: E1209 18:25:03.585441 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:25:06 crc kubenswrapper[4853]: E1209 18:25:06.570995 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:25:15 crc kubenswrapper[4853]: I1209 18:25:15.566721 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:25:15 crc kubenswrapper[4853]: E1209 18:25:15.567522 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:25:16 crc kubenswrapper[4853]: E1209 18:25:16.570256 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:25:21 crc kubenswrapper[4853]: E1209 18:25:21.571244 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:25:30 crc kubenswrapper[4853]: I1209 18:25:30.567881 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:25:30 crc kubenswrapper[4853]: E1209 18:25:30.568702 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:25:31 crc kubenswrapper[4853]: E1209 18:25:31.571062 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:25:35 crc kubenswrapper[4853]: E1209 18:25:35.570316 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:25:41 crc kubenswrapper[4853]: I1209 18:25:41.568127 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:25:41 crc kubenswrapper[4853]: E1209 18:25:41.569473 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:25:46 crc kubenswrapper[4853]: E1209 18:25:46.570343 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:25:48 crc kubenswrapper[4853]: E1209 18:25:48.574750 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:25:54 crc kubenswrapper[4853]: I1209 18:25:54.567009 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:25:54 crc kubenswrapper[4853]: E1209 18:25:54.567964 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:25:56 crc kubenswrapper[4853]: I1209 18:25:56.002892 4853 generic.go:334] "Generic (PLEG): container finished" podID="dbac8a22-f72a-4467-ae1f-1d93430b4049" containerID="9bce7ee87a10f5ace6884ee1564735ee157803b0893231da4e127ebad8efbc66" exitCode=2 Dec 09 18:25:56 crc kubenswrapper[4853]: I1209 18:25:56.002970 4853 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" event={"ID":"dbac8a22-f72a-4467-ae1f-1d93430b4049","Type":"ContainerDied","Data":"9bce7ee87a10f5ace6884ee1564735ee157803b0893231da4e127ebad8efbc66"} Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.517556 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.624525 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x7fn\" (UniqueName: \"kubernetes.io/projected/dbac8a22-f72a-4467-ae1f-1d93430b4049-kube-api-access-8x7fn\") pod \"dbac8a22-f72a-4467-ae1f-1d93430b4049\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.624631 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-inventory\") pod \"dbac8a22-f72a-4467-ae1f-1d93430b4049\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.624764 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-ssh-key\") pod \"dbac8a22-f72a-4467-ae1f-1d93430b4049\" (UID: \"dbac8a22-f72a-4467-ae1f-1d93430b4049\") " Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.638468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbac8a22-f72a-4467-ae1f-1d93430b4049-kube-api-access-8x7fn" (OuterVolumeSpecName: "kube-api-access-8x7fn") pod "dbac8a22-f72a-4467-ae1f-1d93430b4049" (UID: "dbac8a22-f72a-4467-ae1f-1d93430b4049"). InnerVolumeSpecName "kube-api-access-8x7fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.689776 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dbac8a22-f72a-4467-ae1f-1d93430b4049" (UID: "dbac8a22-f72a-4467-ae1f-1d93430b4049"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.695720 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-inventory" (OuterVolumeSpecName: "inventory") pod "dbac8a22-f72a-4467-ae1f-1d93430b4049" (UID: "dbac8a22-f72a-4467-ae1f-1d93430b4049"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.728060 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x7fn\" (UniqueName: \"kubernetes.io/projected/dbac8a22-f72a-4467-ae1f-1d93430b4049-kube-api-access-8x7fn\") on node \"crc\" DevicePath \"\"" Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.728089 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 18:25:57 crc kubenswrapper[4853]: I1209 18:25:57.728099 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbac8a22-f72a-4467-ae1f-1d93430b4049-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 18:25:58 crc kubenswrapper[4853]: I1209 18:25:58.027183 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" event={"ID":"dbac8a22-f72a-4467-ae1f-1d93430b4049","Type":"ContainerDied","Data":"8acf619d8a9d0a06f207971590c0ed3fddeeb7f52b0f1f86fcba3604d7b26a91"} Dec 09 18:25:58 crc kubenswrapper[4853]: I1209 18:25:58.027238 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8acf619d8a9d0a06f207971590c0ed3fddeeb7f52b0f1f86fcba3604d7b26a91" Dec 09 18:25:58 crc kubenswrapper[4853]: I1209 18:25:58.027263 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8" Dec 09 18:25:59 crc kubenswrapper[4853]: E1209 18:25:59.569465 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:26:00 crc kubenswrapper[4853]: E1209 18:26:00.569294 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.608546 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k9m2p/must-gather-qnmln"] Dec 09 18:26:02 crc kubenswrapper[4853]: E1209 18:26:02.609822 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="extract-content" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.609840 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="extract-content" Dec 09 18:26:02 crc kubenswrapper[4853]: E1209 18:26:02.609866 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbac8a22-f72a-4467-ae1f-1d93430b4049" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.609876 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbac8a22-f72a-4467-ae1f-1d93430b4049" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:26:02 crc kubenswrapper[4853]: E1209 18:26:02.609916 4853 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="registry-server" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.609925 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="registry-server" Dec 09 18:26:02 crc kubenswrapper[4853]: E1209 18:26:02.609952 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="extract-utilities" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.609960 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="extract-utilities" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.610245 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbac8a22-f72a-4467-ae1f-1d93430b4049" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.610272 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7627429-6f1b-4e5c-9c8a-f209300b71be" containerName="registry-server" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.611967 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.617991 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-k9m2p"/"default-dockercfg-4jtsc" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.618228 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k9m2p"/"kube-root-ca.crt" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.618388 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k9m2p"/"openshift-service-ca.crt" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.663144 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k9m2p/must-gather-qnmln"] Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.758833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23ab326c-f916-4b00-af22-bf5bdfdbc052-must-gather-output\") pod \"must-gather-qnmln\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.759472 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgwh\" (UniqueName: \"kubernetes.io/projected/23ab326c-f916-4b00-af22-bf5bdfdbc052-kube-api-access-pdgwh\") pod \"must-gather-qnmln\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.882382 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgwh\" (UniqueName: \"kubernetes.io/projected/23ab326c-f916-4b00-af22-bf5bdfdbc052-kube-api-access-pdgwh\") pod \"must-gather-qnmln\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.882552 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/23ab326c-f916-4b00-af22-bf5bdfdbc052-must-gather-output\") pod \"must-gather-qnmln\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.883091 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23ab326c-f916-4b00-af22-bf5bdfdbc052-must-gather-output\") pod \"must-gather-qnmln\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.906111 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgwh\" (UniqueName: \"kubernetes.io/projected/23ab326c-f916-4b00-af22-bf5bdfdbc052-kube-api-access-pdgwh\") pod \"must-gather-qnmln\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:02 crc kubenswrapper[4853]: I1209 18:26:02.970333 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:26:03 crc kubenswrapper[4853]: I1209 18:26:03.456343 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k9m2p/must-gather-qnmln"] Dec 09 18:26:04 crc kubenswrapper[4853]: I1209 18:26:04.105903 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/must-gather-qnmln" event={"ID":"23ab326c-f916-4b00-af22-bf5bdfdbc052","Type":"ContainerStarted","Data":"c8120a2e5ee8d18f23b28b4359602bf050fb5e5c468d8fb8746e187a7a3b93d8"} Dec 09 18:26:09 crc kubenswrapper[4853]: I1209 18:26:09.567635 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:26:09 crc kubenswrapper[4853]: E1209 18:26:09.568788 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:26:12 crc kubenswrapper[4853]: E1209 18:26:12.570025 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:26:12 crc kubenswrapper[4853]: E1209 18:26:12.571078 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:26:15 crc kubenswrapper[4853]: I1209 18:26:15.252717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/must-gather-qnmln" event={"ID":"23ab326c-f916-4b00-af22-bf5bdfdbc052","Type":"ContainerStarted","Data":"17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4"} Dec 09 18:26:15 crc kubenswrapper[4853]: I1209 18:26:15.253277 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/must-gather-qnmln" event={"ID":"23ab326c-f916-4b00-af22-bf5bdfdbc052","Type":"ContainerStarted","Data":"11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a"} Dec 09 18:26:15 crc kubenswrapper[4853]: I1209 18:26:15.285120 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k9m2p/must-gather-qnmln" podStartSLOduration=2.472234344 podStartE2EDuration="13.285093197s" podCreationTimestamp="2025-12-09 18:26:02 +0000 UTC" firstStartedPulling="2025-12-09 18:26:03.460207359 +0000 UTC m=+5390.394946541" lastFinishedPulling="2025-12-09 18:26:14.273066212 +0000 UTC m=+5401.207805394" observedRunningTime="2025-12-09 18:26:15.271257493 +0000 UTC m=+5402.205996685" watchObservedRunningTime="2025-12-09 18:26:15.285093197 +0000 UTC m=+5402.219832419" Dec 09 18:26:19 crc kubenswrapper[4853]: I1209 18:26:19.804933 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k9m2p/crc-debug-dvnmm"] Dec 09 18:26:19 crc kubenswrapper[4853]: I1209 18:26:19.807140 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:19 crc kubenswrapper[4853]: I1209 18:26:19.910352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-host\") pod \"crc-debug-dvnmm\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:19 crc kubenswrapper[4853]: I1209 18:26:19.910470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kcgl\" (UniqueName: \"kubernetes.io/projected/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-kube-api-access-6kcgl\") pod \"crc-debug-dvnmm\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:20 crc kubenswrapper[4853]: I1209 18:26:20.012237 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-host\") pod \"crc-debug-dvnmm\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:20 crc kubenswrapper[4853]: I1209 18:26:20.012333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kcgl\" (UniqueName: \"kubernetes.io/projected/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-kube-api-access-6kcgl\") pod \"crc-debug-dvnmm\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:20 crc kubenswrapper[4853]: I1209 18:26:20.012479 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-host\") pod \"crc-debug-dvnmm\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:20 crc kubenswrapper[4853]: I1209 18:26:20.034687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kcgl\" (UniqueName: \"kubernetes.io/projected/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-kube-api-access-6kcgl\") pod \"crc-debug-dvnmm\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:20 crc 
kubenswrapper[4853]: I1209 18:26:20.125323 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:20 crc kubenswrapper[4853]: I1209 18:26:20.315918 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" event={"ID":"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7","Type":"ContainerStarted","Data":"6e792e53d6edb94813ed8c1f12f9e930f4b8224f8d874266fe979be67c8514e1"} Dec 09 18:26:20 crc kubenswrapper[4853]: I1209 18:26:20.567016 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:26:20 crc kubenswrapper[4853]: E1209 18:26:20.567522 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:26:23 crc kubenswrapper[4853]: E1209 18:26:23.578749 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:26:27 crc kubenswrapper[4853]: E1209 18:26:27.570129 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:26:32 crc kubenswrapper[4853]: I1209 18:26:32.450947 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" event={"ID":"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7","Type":"ContainerStarted","Data":"236dd3203dbdbcd42c6da671a0ff27a685316cecff52930488a5d013bdb2cf81"} Dec 09 18:26:32 crc kubenswrapper[4853]: I1209 18:26:32.466948 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" podStartSLOduration=1.8286475580000001 podStartE2EDuration="13.466928698s" podCreationTimestamp="2025-12-09 18:26:19 +0000 UTC" firstStartedPulling="2025-12-09 18:26:20.177681284 +0000 UTC m=+5407.112420466" lastFinishedPulling="2025-12-09 18:26:31.815962414 +0000 UTC m=+5418.750701606" observedRunningTime="2025-12-09 18:26:32.464986855 +0000 UTC m=+5419.399726037" watchObservedRunningTime="2025-12-09 18:26:32.466928698 +0000 UTC m=+5419.401667880" Dec 09 18:26:34 crc kubenswrapper[4853]: I1209 18:26:34.568377 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:26:35 crc kubenswrapper[4853]: I1209 18:26:35.483622 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"405135359b7635f5e67016d3c1cff75d7b75670dbb5c667e67e4591d6717b5c7"} Dec 09 18:26:37 crc kubenswrapper[4853]: E1209 18:26:37.570531 4853 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:26:42 crc kubenswrapper[4853]: E1209 18:26:42.571185 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:26:48 crc kubenswrapper[4853]: E1209 18:26:48.573200 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:26:53 crc kubenswrapper[4853]: I1209 18:26:53.737430 4853 generic.go:334] "Generic (PLEG): container finished" podID="03bbbcfa-b25b-4d61-b7be-7bb75deaedc7" containerID="236dd3203dbdbcd42c6da671a0ff27a685316cecff52930488a5d013bdb2cf81" exitCode=0 Dec 09 18:26:53 crc kubenswrapper[4853]: I1209 18:26:53.737514 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" event={"ID":"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7","Type":"ContainerDied","Data":"236dd3203dbdbcd42c6da671a0ff27a685316cecff52930488a5d013bdb2cf81"} Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.363702 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.405069 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k9m2p/crc-debug-dvnmm"] Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.416498 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k9m2p/crc-debug-dvnmm"] Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.473775 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kcgl\" (UniqueName: \"kubernetes.io/projected/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-kube-api-access-6kcgl\") pod \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.474033 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-host\") pod \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\" (UID: \"03bbbcfa-b25b-4d61-b7be-7bb75deaedc7\") " Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.474639 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-host" (OuterVolumeSpecName: "host") pod "03bbbcfa-b25b-4d61-b7be-7bb75deaedc7" (UID: "03bbbcfa-b25b-4d61-b7be-7bb75deaedc7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.475567 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-host\") on node \"crc\" DevicePath \"\"" Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.495905 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-kube-api-access-6kcgl" (OuterVolumeSpecName: "kube-api-access-6kcgl") pod "03bbbcfa-b25b-4d61-b7be-7bb75deaedc7" (UID: "03bbbcfa-b25b-4d61-b7be-7bb75deaedc7"). InnerVolumeSpecName "kube-api-access-6kcgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.578022 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kcgl\" (UniqueName: \"kubernetes.io/projected/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7-kube-api-access-6kcgl\") on node \"crc\" DevicePath \"\"" Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.580838 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03bbbcfa-b25b-4d61-b7be-7bb75deaedc7" path="/var/lib/kubelet/pods/03bbbcfa-b25b-4d61-b7be-7bb75deaedc7/volumes" Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.761774 4853 scope.go:117] "RemoveContainer" containerID="236dd3203dbdbcd42c6da671a0ff27a685316cecff52930488a5d013bdb2cf81" Dec 09 18:26:55 crc kubenswrapper[4853]: I1209 18:26:55.761813 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-dvnmm" Dec 09 18:26:56 crc kubenswrapper[4853]: E1209 18:26:56.569684 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.619137 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k9m2p/crc-debug-b5cfc"] Dec 09 18:26:56 crc kubenswrapper[4853]: E1209 18:26:56.619628 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03bbbcfa-b25b-4d61-b7be-7bb75deaedc7" containerName="container-00" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.619641 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="03bbbcfa-b25b-4d61-b7be-7bb75deaedc7" containerName="container-00" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.619884 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="03bbbcfa-b25b-4d61-b7be-7bb75deaedc7" containerName="container-00" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.620629 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.705087 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gws7\" (UniqueName: \"kubernetes.io/projected/35263bd8-0186-4bd2-a05d-c674892dd116-kube-api-access-8gws7\") pod \"crc-debug-b5cfc\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.705665 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35263bd8-0186-4bd2-a05d-c674892dd116-host\") pod \"crc-debug-b5cfc\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.807391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gws7\" (UniqueName: \"kubernetes.io/projected/35263bd8-0186-4bd2-a05d-c674892dd116-kube-api-access-8gws7\") pod \"crc-debug-b5cfc\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.807647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35263bd8-0186-4bd2-a05d-c674892dd116-host\") pod \"crc-debug-b5cfc\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.807815 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35263bd8-0186-4bd2-a05d-c674892dd116-host\") pod \"crc-debug-b5cfc\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:56 crc kubenswrapper[4853]: I1209 18:26:56.830064 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gws7\" (UniqueName: \"kubernetes.io/projected/35263bd8-0186-4bd2-a05d-c674892dd116-kube-api-access-8gws7\") pod \"crc-debug-b5cfc\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:57 crc kubenswrapper[4853]: I1209 18:26:57.003772 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:57 crc kubenswrapper[4853]: I1209 18:26:57.801358 4853 generic.go:334] "Generic (PLEG): container finished" podID="35263bd8-0186-4bd2-a05d-c674892dd116" containerID="6040827c5220914f15c48fa068d68971da4083d8560466df52ea9f8fc7d63ad4" exitCode=1 Dec 09 18:26:57 crc kubenswrapper[4853]: I1209 18:26:57.801434 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" event={"ID":"35263bd8-0186-4bd2-a05d-c674892dd116","Type":"ContainerDied","Data":"6040827c5220914f15c48fa068d68971da4083d8560466df52ea9f8fc7d63ad4"} Dec 09 18:26:57 crc kubenswrapper[4853]: I1209 18:26:57.801918 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" event={"ID":"35263bd8-0186-4bd2-a05d-c674892dd116","Type":"ContainerStarted","Data":"9ba94387630796c04b61c02d2ed9718d69c1e3c475f9a0519f12dfd7a61be103"} Dec 09 18:26:57 crc kubenswrapper[4853]: I1209 18:26:57.847913 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k9m2p/crc-debug-b5cfc"] Dec 09 18:26:57 crc kubenswrapper[4853]: I1209 18:26:57.858444 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k9m2p/crc-debug-b5cfc"] Dec 09 18:26:58 crc kubenswrapper[4853]: I1209 18:26:58.921146 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-b5cfc" Dec 09 18:26:58 crc kubenswrapper[4853]: I1209 18:26:58.956028 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35263bd8-0186-4bd2-a05d-c674892dd116-host\") pod \"35263bd8-0186-4bd2-a05d-c674892dd116\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " Dec 09 18:26:58 crc kubenswrapper[4853]: I1209 18:26:58.956080 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gws7\" (UniqueName: \"kubernetes.io/projected/35263bd8-0186-4bd2-a05d-c674892dd116-kube-api-access-8gws7\") pod \"35263bd8-0186-4bd2-a05d-c674892dd116\" (UID: \"35263bd8-0186-4bd2-a05d-c674892dd116\") " Dec 09 18:26:58 crc kubenswrapper[4853]: I1209 18:26:58.956120 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35263bd8-0186-4bd2-a05d-c674892dd116-host" (OuterVolumeSpecName: "host") pod "35263bd8-0186-4bd2-a05d-c674892dd116" (UID: "35263bd8-0186-4bd2-a05d-c674892dd116"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 18:26:58 crc kubenswrapper[4853]: I1209 18:26:58.956862 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35263bd8-0186-4bd2-a05d-c674892dd116-host\") on node \"crc\" DevicePath \"\"" Dec 09 18:26:58 crc kubenswrapper[4853]: I1209 18:26:58.961230 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35263bd8-0186-4bd2-a05d-c674892dd116-kube-api-access-8gws7" (OuterVolumeSpecName: "kube-api-access-8gws7") pod "35263bd8-0186-4bd2-a05d-c674892dd116" (UID: "35263bd8-0186-4bd2-a05d-c674892dd116"). InnerVolumeSpecName "kube-api-access-8gws7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:26:59 crc kubenswrapper[4853]: I1209 18:26:59.059361 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gws7\" (UniqueName: \"kubernetes.io/projected/35263bd8-0186-4bd2-a05d-c674892dd116-kube-api-access-8gws7\") on node \"crc\" DevicePath \"\"" Dec 09 18:26:59 crc kubenswrapper[4853]: I1209 18:26:59.570749 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:26:59 crc kubenswrapper[4853]: I1209 18:26:59.587551 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35263bd8-0186-4bd2-a05d-c674892dd116" path="/var/lib/kubelet/pods/35263bd8-0186-4bd2-a05d-c674892dd116/volumes" Dec 09 18:26:59 crc kubenswrapper[4853]: E1209 18:26:59.701408 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:26:59 crc kubenswrapper[4853]: E1209 18:26:59.701814 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:26:59 crc kubenswrapper[4853]: E1209 18:26:59.701955 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:26:59 crc kubenswrapper[4853]: E1209 18:26:59.703098 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:26:59 crc kubenswrapper[4853]: I1209 18:26:59.820554 4853 scope.go:117] "RemoveContainer" containerID="6040827c5220914f15c48fa068d68971da4083d8560466df52ea9f8fc7d63ad4" Dec 09 18:26:59 crc kubenswrapper[4853]: I1209 18:26:59.820623 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/crc-debug-b5cfc"
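Both failing pulls in this log trace back to the same root cause spelled out above: the current-tested tag was removed on quay.rdoproject.org ("revive via time machine" refers to Quay's tag-history restore feature), so CRI-O's PullImage RPC fails, kubelet surfaces ErrImagePull, and the pod workers settle into the periodic ImagePullBackOff retries seen throughout this section. A quick way to confirm the tag's state from any host, assuming skopeo is installed (a sketch, not part of the log):

```python
import subprocess

# Probe whether the tag still resolves; for a deleted tag skopeo reports a
# "reading manifest ... unknown" error, mirroring the kubelet message above.
ref = ("docker://quay.rdoproject.org/"
       "podified-master-centos10/openstack-heat-engine:current-tested")
result = subprocess.run(["skopeo", "inspect", ref],
                        capture_output=True, text=True)
print("tag pullable" if result.returncode == 0 else result.stderr.strip())
```

Until the tag is restored or the pod specs are pointed at a live tag, these back-off entries keep recurring at growing intervals.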
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:27:08 crc kubenswrapper[4853]: E1209 18:27:08.716086 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:27:08 crc kubenswrapper[4853]: E1209 18:27:08.717507 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.300938 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ljcm"] Dec 09 18:27:11 crc kubenswrapper[4853]: E1209 18:27:11.302190 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35263bd8-0186-4bd2-a05d-c674892dd116" containerName="container-00" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.302210 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="35263bd8-0186-4bd2-a05d-c674892dd116" containerName="container-00" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.302556 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="35263bd8-0186-4bd2-a05d-c674892dd116" containerName="container-00" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.304786 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.321227 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ljcm"] Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.406452 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkm4j\" (UniqueName: \"kubernetes.io/projected/50310369-ae81-4efd-bf04-39deb9c9a3cc-kube-api-access-qkm4j\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.406906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-utilities\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.407041 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-catalog-content\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.509261 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-utilities\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.509339 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-catalog-content\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.509551 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkm4j\" (UniqueName: \"kubernetes.io/projected/50310369-ae81-4efd-bf04-39deb9c9a3cc-kube-api-access-qkm4j\") pod \"redhat-operators-4ljcm\" (UID: 
\"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.509926 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-utilities\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.510046 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-catalog-content\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.884904 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkm4j\" (UniqueName: \"kubernetes.io/projected/50310369-ae81-4efd-bf04-39deb9c9a3cc-kube-api-access-qkm4j\") pod \"redhat-operators-4ljcm\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:11 crc kubenswrapper[4853]: I1209 18:27:11.939683 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:12 crc kubenswrapper[4853]: I1209 18:27:12.537549 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ljcm"] Dec 09 18:27:12 crc kubenswrapper[4853]: I1209 18:27:12.992844 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerStarted","Data":"fadf5c9662de8239d72afeed7c3159f96516bf875f2e0c938eb9b53ee98a5f87"} Dec 09 18:27:12 crc kubenswrapper[4853]: I1209 18:27:12.993139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerStarted","Data":"a4d6346416dde626521433c9a6400a6628370730b1e86b6ef40e36096ee30c68"} Dec 09 18:27:14 crc kubenswrapper[4853]: I1209 18:27:14.008325 4853 generic.go:334] "Generic (PLEG): container finished" podID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerID="fadf5c9662de8239d72afeed7c3159f96516bf875f2e0c938eb9b53ee98a5f87" exitCode=0 Dec 09 18:27:14 crc kubenswrapper[4853]: I1209 18:27:14.008435 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerDied","Data":"fadf5c9662de8239d72afeed7c3159f96516bf875f2e0c938eb9b53ee98a5f87"} Dec 09 18:27:14 crc kubenswrapper[4853]: E1209 18:27:14.569394 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:27:16 crc kubenswrapper[4853]: I1209 18:27:16.034968 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerStarted","Data":"ad62174e39ad3cf63b9411c9d33a9b3a0b33413feb2daa3a7bdbb35cfd9fc1e1"} Dec 09 18:27:20 
crc kubenswrapper[4853]: E1209 18:27:20.570273 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:27:23 crc kubenswrapper[4853]: I1209 18:27:23.137894 4853 generic.go:334] "Generic (PLEG): container finished" podID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerID="ad62174e39ad3cf63b9411c9d33a9b3a0b33413feb2daa3a7bdbb35cfd9fc1e1" exitCode=0 Dec 09 18:27:23 crc kubenswrapper[4853]: I1209 18:27:23.138095 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerDied","Data":"ad62174e39ad3cf63b9411c9d33a9b3a0b33413feb2daa3a7bdbb35cfd9fc1e1"} Dec 09 18:27:24 crc kubenswrapper[4853]: I1209 18:27:24.150585 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerStarted","Data":"93439d86b7905f22bff6f88c2e7f8e3874030783abef6e75be8ed0cf7521bdf6"} Dec 09 18:27:24 crc kubenswrapper[4853]: I1209 18:27:24.172175 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ljcm" podStartSLOduration=3.566169876 podStartE2EDuration="13.172141362s" podCreationTimestamp="2025-12-09 18:27:11 +0000 UTC" firstStartedPulling="2025-12-09 18:27:14.011369743 +0000 UTC m=+5460.946108915" lastFinishedPulling="2025-12-09 18:27:23.617341219 +0000 UTC m=+5470.552080401" observedRunningTime="2025-12-09 18:27:24.166241852 +0000 UTC m=+5471.100981044" watchObservedRunningTime="2025-12-09 18:27:24.172141362 +0000 UTC m=+5471.106880544" Dec 09 18:27:29 crc kubenswrapper[4853]: E1209 18:27:29.570046 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:27:31 crc kubenswrapper[4853]: I1209 18:27:31.940785 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:31 crc kubenswrapper[4853]: I1209 18:27:31.941132 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:32 crc kubenswrapper[4853]: I1209 18:27:32.035267 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:32 crc kubenswrapper[4853]: I1209 18:27:32.283314 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:32 crc kubenswrapper[4853]: I1209 18:27:32.341616 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ljcm"]
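The API DELETE above starts graceful termination for redhat-operators-4ljcm: kubelet asks the runtime (CRI-O here) to stop registry-server within the grace period recorded on the next line (gracePeriod=2), meaning SIGTERM (or the image's configured stop signal) followed by SIGKILL if the deadline passes; the ContainerDied events that follow show it exited cleanly (exitCode=0) inside the window. A toy sketch of that stop sequence (the process and timeout are illustrative, nothing here is taken from the log):

```python
import signal
import subprocess

def stop_with_grace(proc: subprocess.Popen, grace_seconds: float) -> int:
    """Deliver SIGTERM, wait up to the grace period, then escalate to SIGKILL."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=grace_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()  # escalate after the deadline
        return proc.wait()

child = subprocess.Popen(["sleep", "60"])
print("exit status:", stop_with_grace(child, grace_seconds=2.0))
```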
containerID="cri-o://93439d86b7905f22bff6f88c2e7f8e3874030783abef6e75be8ed0cf7521bdf6" gracePeriod=2 Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.264289 4853 generic.go:334] "Generic (PLEG): container finished" podID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerID="93439d86b7905f22bff6f88c2e7f8e3874030783abef6e75be8ed0cf7521bdf6" exitCode=0 Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.264386 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerDied","Data":"93439d86b7905f22bff6f88c2e7f8e3874030783abef6e75be8ed0cf7521bdf6"} Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.264715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljcm" event={"ID":"50310369-ae81-4efd-bf04-39deb9c9a3cc","Type":"ContainerDied","Data":"a4d6346416dde626521433c9a6400a6628370730b1e86b6ef40e36096ee30c68"} Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.264733 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4d6346416dde626521433c9a6400a6628370730b1e86b6ef40e36096ee30c68" Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.283180 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.398028 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-catalog-content\") pod \"50310369-ae81-4efd-bf04-39deb9c9a3cc\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.398192 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-utilities\") pod \"50310369-ae81-4efd-bf04-39deb9c9a3cc\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.398219 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkm4j\" (UniqueName: \"kubernetes.io/projected/50310369-ae81-4efd-bf04-39deb9c9a3cc-kube-api-access-qkm4j\") pod \"50310369-ae81-4efd-bf04-39deb9c9a3cc\" (UID: \"50310369-ae81-4efd-bf04-39deb9c9a3cc\") " Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.399932 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-utilities" (OuterVolumeSpecName: "utilities") pod "50310369-ae81-4efd-bf04-39deb9c9a3cc" (UID: "50310369-ae81-4efd-bf04-39deb9c9a3cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.406970 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50310369-ae81-4efd-bf04-39deb9c9a3cc-kube-api-access-qkm4j" (OuterVolumeSpecName: "kube-api-access-qkm4j") pod "50310369-ae81-4efd-bf04-39deb9c9a3cc" (UID: "50310369-ae81-4efd-bf04-39deb9c9a3cc"). InnerVolumeSpecName "kube-api-access-qkm4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.503620 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.503931 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkm4j\" (UniqueName: \"kubernetes.io/projected/50310369-ae81-4efd-bf04-39deb9c9a3cc-kube-api-access-qkm4j\") on node \"crc\" DevicePath \"\"" Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.540735 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50310369-ae81-4efd-bf04-39deb9c9a3cc" (UID: "50310369-ae81-4efd-bf04-39deb9c9a3cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:27:35 crc kubenswrapper[4853]: E1209 18:27:35.570176 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:27:35 crc kubenswrapper[4853]: I1209 18:27:35.606557 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50310369-ae81-4efd-bf04-39deb9c9a3cc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:27:36 crc kubenswrapper[4853]: I1209 18:27:36.272890 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljcm" Dec 09 18:27:36 crc kubenswrapper[4853]: I1209 18:27:36.300276 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ljcm"] Dec 09 18:27:36 crc kubenswrapper[4853]: I1209 18:27:36.310565 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ljcm"] Dec 09 18:27:37 crc kubenswrapper[4853]: I1209 18:27:37.579659 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" path="/var/lib/kubelet/pods/50310369-ae81-4efd-bf04-39deb9c9a3cc/volumes" Dec 09 18:27:41 crc kubenswrapper[4853]: E1209 18:27:41.570693 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:27:48 crc kubenswrapper[4853]: E1209 18:27:48.571369 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.374766 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_67d920ff-76fb-42a7-aff2-252d556a1d10/aodh-api/0.log" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.558206 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_67d920ff-76fb-42a7-aff2-252d556a1d10/aodh-evaluator/0.log" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.573880 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_67d920ff-76fb-42a7-aff2-252d556a1d10/aodh-listener/0.log" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.658694 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_67d920ff-76fb-42a7-aff2-252d556a1d10/aodh-notifier/0.log" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.741781 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-767f9884cb-lxgw6_08bb4256-cdbf-4359-a670-8cfc13b8af47/barbican-api/0.log" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.769763 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-767f9884cb-lxgw6_08bb4256-cdbf-4359-a670-8cfc13b8af47/barbican-api-log/0.log" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.919000 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b4ddcfc8d-7ttth_d41a353c-83cf-4482-9984-5197c7709ced/barbican-keystone-listener/0.log" Dec 09 18:27:53 crc kubenswrapper[4853]: I1209 18:27:53.966755 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b4ddcfc8d-7ttth_d41a353c-83cf-4482-9984-5197c7709ced/barbican-keystone-listener-log/0.log" Dec 09 18:27:54 crc kubenswrapper[4853]: I1209 18:27:54.068931 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-57459d985f-7pt4c_281434f1-0f91-404d-8f13-2bbf97b18237/barbican-worker/0.log" Dec 09 18:27:54 crc kubenswrapper[4853]: I1209 18:27:54.124042 4853 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_barbican-worker-57459d985f-7pt4c_281434f1-0f91-404d-8f13-2bbf97b18237/barbican-worker-log/0.log" Dec 09 18:27:54 crc kubenswrapper[4853]: I1209 18:27:54.282978 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lz29p_a9d8c808-933c-4c72-b6a2-8fd9371629ef/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.016056 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e815965-15fe-4f84-8eb4-133f91163a08/proxy-httpd/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.017270 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e815965-15fe-4f84-8eb4-133f91163a08/ceilometer-notification-agent/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.052552 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6e815965-15fe-4f84-8eb4-133f91163a08/sg-core/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.202047 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_23f2bd57-bac0-42fd-8203-0fd8f3720109/cinder-api-log/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.270665 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_23f2bd57-bac0-42fd-8203-0fd8f3720109/cinder-api/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.447231 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_55f413b4-3b77-4e15-97f8-1cedee56a118/cinder-scheduler/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.519843 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_55f413b4-3b77-4e15-97f8-1cedee56a118/probe/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.530609 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-q2qlg_a2d602a5-68a3-4b5a-825b-3313e3e85c0e/init/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: E1209 18:27:55.569981 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.709924 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-q2qlg_a2d602a5-68a3-4b5a-825b-3313e3e85c0e/init/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.717266 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-q2qlg_a2d602a5-68a3-4b5a-825b-3313e3e85c0e/dnsmasq-dns/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.821418 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9hdkw_7796c327-5952-4b15-a864-511d8f1c75d6/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:55 crc kubenswrapper[4853]: I1209 18:27:55.941793 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-d9t66_4f764ae7-2150-4081-9763-a0ef9ce1640f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:56 crc 
kubenswrapper[4853]: I1209 18:27:56.025716 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kqdhc_a6917b95-0219-402f-8309-76ad558f9756/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:56 crc kubenswrapper[4853]: I1209 18:27:56.148836 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n2b4x_fb81bc18-40f0-48b1-94a1-c0f4ca35e36c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:56 crc kubenswrapper[4853]: I1209 18:27:56.916345 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nlf4v_4a5b926b-fd04-4fbb-b526-3d2e14c6e2f9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:56 crc kubenswrapper[4853]: I1209 18:27:56.948305 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pk9k8_dbac8a22-f72a-4467-ae1f-1d93430b4049/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:57 crc kubenswrapper[4853]: I1209 18:27:57.185933 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zgkzf_0f19467b-be6d-4600-8e1e-4bcb5627e44f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:27:57 crc kubenswrapper[4853]: I1209 18:27:57.228259 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f508ab5e-133f-469f-9791-3444c10fc527/glance-log/0.log" Dec 09 18:27:57 crc kubenswrapper[4853]: I1209 18:27:57.245986 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f508ab5e-133f-469f-9791-3444c10fc527/glance-httpd/0.log" Dec 09 18:27:57 crc kubenswrapper[4853]: I1209 18:27:57.420664 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b9725a66-09e4-4b83-9919-34cb10d5ed3f/glance-log/0.log" Dec 09 18:27:57 crc kubenswrapper[4853]: I1209 18:27:57.433497 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b9725a66-09e4-4b83-9919-34cb10d5ed3f/glance-httpd/0.log" Dec 09 18:27:57 crc kubenswrapper[4853]: I1209 18:27:57.998121 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-8648cdff54-k9lsq_2b00ae75-9222-4cf4-a896-27abacad2ae0/heat-engine/0.log" Dec 09 18:27:58 crc kubenswrapper[4853]: I1209 18:27:58.126271 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6b4ff486d-bdppg_33db0244-869a-4927-87c3-092a2aae9d4a/heat-api/0.log" Dec 09 18:27:58 crc kubenswrapper[4853]: I1209 18:27:58.176976 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-fb7c95bc-v962c_40b99c71-7761-4134-b7f4-021564f209f5/heat-cfnapi/0.log" Dec 09 18:27:58 crc kubenswrapper[4853]: I1209 18:27:58.182391 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29421721-8srlv_160001ba-a330-4185-b1a1-67bfbdda8cd9/keystone-cron/0.log" Dec 09 18:27:58 crc kubenswrapper[4853]: I1209 18:27:58.283027 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65d74d4db4-2bcdr_e89854ae-ff97-4850-992f-14c38c2e1848/keystone-api/0.log"
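This long run of log.go:25 entries records kubelet's log reader finishing a pass over container log files under /var/log/pods, the kind of sweep the must-gather pod above triggers when it collects logs for the whole openstack namespace. The files themselves use the CRI logging format: an RFC3339Nano timestamp, the stream name, a P (partial) or F (full) tag, then the message. A minimal parser sketch (the sample record is invented for illustration):

```python
def parse_cri_log_line(line: str) -> dict:
    # CRI log format: <timestamp> <stdout|stderr> <P|F> <message>;
    # P marks a partial message that continues in the next record.
    ts, stream, tag, message = line.rstrip("\n").split(" ", 3)
    return {"time": ts, "stream": stream,
            "partial": tag == "P", "message": message}

sample = "2025-12-09T18:27:53.374766Z stderr F INFO starting aodh-api\n"
print(parse_cri_log_line(sample))
```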
path="/var/log/pods/openstack_kube-state-metrics-0_fa877e16-b821-4ef2-8840-806f276e784c/kube-state-metrics/0.log" Dec 09 18:27:58 crc kubenswrapper[4853]: I1209 18:27:58.601184 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_38030366-fb21-422a-8e22-db3aa78915ea/mysqld-exporter/0.log" Dec 09 18:27:58 crc kubenswrapper[4853]: I1209 18:27:58.728685 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-645985f88c-fqxds_481f80f1-29b5-4bdc-af3a-8c6cea94774f/neutron-api/0.log" Dec 09 18:27:58 crc kubenswrapper[4853]: I1209 18:27:58.777344 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-645985f88c-fqxds_481f80f1-29b5-4bdc-af3a-8c6cea94774f/neutron-httpd/0.log" Dec 09 18:27:59 crc kubenswrapper[4853]: I1209 18:27:59.122980 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f03d1537-be82-46ba-a69f-06732942a6e6/nova-api-log/0.log" Dec 09 18:27:59 crc kubenswrapper[4853]: I1209 18:27:59.333390 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b6e8c860-e6af-4b73-a1d4-764a2bc42f39/nova-cell0-conductor-conductor/0.log" Dec 09 18:27:59 crc kubenswrapper[4853]: I1209 18:27:59.443146 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ba91b779-9f16-44fa-97db-a6e51125893a/nova-cell1-conductor-conductor/0.log" Dec 09 18:27:59 crc kubenswrapper[4853]: I1209 18:27:59.528458 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f03d1537-be82-46ba-a69f-06732942a6e6/nova-api-api/0.log" Dec 09 18:27:59 crc kubenswrapper[4853]: I1209 18:27:59.703339 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_49c74f06-d991-4be0-8e3f-4c76350361cd/nova-cell1-novncproxy-novncproxy/0.log" Dec 09 18:27:59 crc kubenswrapper[4853]: I1209 18:27:59.863871 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2c3566a2-2617-4b9d-951e-a124a027c307/nova-metadata-log/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.038387 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0f94de49-e6de-4df0-8615-1f037e3c6ac1/nova-scheduler-scheduler/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.096663 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5cfbc81f-b48d-4790-a213-10daf9f83287/mysql-bootstrap/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.331037 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5cfbc81f-b48d-4790-a213-10daf9f83287/mysql-bootstrap/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.336283 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5cfbc81f-b48d-4790-a213-10daf9f83287/galera/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.546083 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dc8bc986-e8a8-467c-8e2a-795c26a74de7/mysql-bootstrap/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: E1209 18:28:00.569907 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.744348 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dc8bc986-e8a8-467c-8e2a-795c26a74de7/mysql-bootstrap/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.765475 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dc8bc986-e8a8-467c-8e2a-795c26a74de7/galera/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.924226 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_24e96bca-760e-4742-823e-5cb3dc9d752e/openstackclient/0.log" Dec 09 18:28:00 crc kubenswrapper[4853]: I1209 18:28:00.967740 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-k9tc2_e9047f51-9852-47e3-bc10-649c8d638054/ovn-controller/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.228017 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6bchm_0b54962b-4869-4264-9a51-95ccdb7f3cbf/openstack-network-exporter/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.362103 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-twpcm_b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568/ovsdb-server-init/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.571842 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-twpcm_b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568/ovsdb-server-init/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.596800 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-twpcm_b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568/ovs-vswitchd/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.634190 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-twpcm_b92ea732-3ad4-4f9a-bb6b-dd5aa7aad568/ovsdb-server/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.707934 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2c3566a2-2617-4b9d-951e-a124a027c307/nova-metadata-metadata/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.810143 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_10737df5-395b-499d-a49d-daac220f432c/openstack-network-exporter/0.log" Dec 09 18:28:01 crc kubenswrapper[4853]: I1209 18:28:01.852638 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_10737df5-395b-499d-a49d-daac220f432c/ovn-northd/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.033726 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4084ccc4-89b3-4be7-a0aa-83f3619a0cb1/openstack-network-exporter/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.093424 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4084ccc4-89b3-4be7-a0aa-83f3619a0cb1/ovsdbserver-nb/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.240120 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a0106126-8691-4275-82e8-a74d76c6482c/openstack-network-exporter/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.350122 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_a0106126-8691-4275-82e8-a74d76c6482c/ovsdbserver-sb/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.517009 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5496596656-4sjvd_9571bd10-147c-4016-af2c-0dc4df16ae63/placement-api/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.589824 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c5edba71-6b69-4f76-9dde-ed6c7a7ecb71/init-config-reloader/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.592087 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5496596656-4sjvd_9571bd10-147c-4016-af2c-0dc4df16ae63/placement-log/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.772882 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c5edba71-6b69-4f76-9dde-ed6c7a7ecb71/init-config-reloader/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.783564 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c5edba71-6b69-4f76-9dde-ed6c7a7ecb71/config-reloader/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.783792 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c5edba71-6b69-4f76-9dde-ed6c7a7ecb71/thanos-sidecar/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.799673 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c5edba71-6b69-4f76-9dde-ed6c7a7ecb71/prometheus/0.log" Dec 09 18:28:02 crc kubenswrapper[4853]: I1209 18:28:02.995068 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fe91677e-e106-4624-a45e-45111c868559/setup-container/0.log" Dec 09 18:28:03 crc kubenswrapper[4853]: I1209 18:28:03.149921 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fe91677e-e106-4624-a45e-45111c868559/setup-container/0.log" Dec 09 18:28:03 crc kubenswrapper[4853]: I1209 18:28:03.206938 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fe91677e-e106-4624-a45e-45111c868559/rabbitmq/0.log" Dec 09 18:28:03 crc kubenswrapper[4853]: I1209 18:28:03.287954 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2ce495e5-4db9-457d-a5c9-eb39308cbcd2/setup-container/0.log" Dec 09 18:28:03 crc kubenswrapper[4853]: I1209 18:28:03.600901 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2ce495e5-4db9-457d-a5c9-eb39308cbcd2/setup-container/0.log" Dec 09 18:28:03 crc kubenswrapper[4853]: I1209 18:28:03.603634 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2ce495e5-4db9-457d-a5c9-eb39308cbcd2/rabbitmq/0.log" Dec 09 18:28:03 crc kubenswrapper[4853]: I1209 18:28:03.739712 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-crt7p_da7f9c0a-cae9-4fcc-90d8-893f9ca97fe9/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:28:03 crc kubenswrapper[4853]: I1209 18:28:03.948154 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8wzgg_c8f54559-8d6b-42e6-b5f3-db4c8b6ed433/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 18:28:04 crc 
kubenswrapper[4853]: I1209 18:28:04.268048 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-756c8b85d7-nmj2j_05b3787e-b650-484a-8fa1-5371b8e96c0e/proxy-server/0.log" Dec 09 18:28:04 crc kubenswrapper[4853]: I1209 18:28:04.349915 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-v2tq9_93e3401b-eae8-4c50-a73b-686525de14a2/swift-ring-rebalance/0.log" Dec 09 18:28:04 crc kubenswrapper[4853]: I1209 18:28:04.431789 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-756c8b85d7-nmj2j_05b3787e-b650-484a-8fa1-5371b8e96c0e/proxy-httpd/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.178679 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/account-reaper/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.190431 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/account-auditor/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.213838 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/account-replicator/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.408784 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/account-server/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.437317 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/container-auditor/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.487994 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/container-replicator/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.510927 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/container-server/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.668552 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/container-updater/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.690748 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/object-expirer/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.761454 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/object-auditor/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.789309 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/object-replicator/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.907793 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/object-updater/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.917486 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/object-server/0.log" Dec 09 18:28:05 crc kubenswrapper[4853]: I1209 18:28:05.987244 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/rsync/0.log" Dec 09 18:28:06 crc kubenswrapper[4853]: I1209 18:28:06.043160 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2f6e868f-f4bc-42d3-bbe6-2a391e2b768d/swift-recon-cron/0.log" Dec 09 18:28:09 crc kubenswrapper[4853]: E1209 18:28:09.569648 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:28:12 crc kubenswrapper[4853]: I1209 18:28:12.619024 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8c1f0f91-fd80-4a19-9561-119c381afc9c/memcached/0.log" Dec 09 18:28:14 crc kubenswrapper[4853]: E1209 18:28:14.569764 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:28:21 crc kubenswrapper[4853]: E1209 18:28:21.569679 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:28:27 crc kubenswrapper[4853]: E1209 18:28:27.571459 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:28:34 crc kubenswrapper[4853]: E1209 18:28:34.570856 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:28:38 crc kubenswrapper[4853]: I1209 18:28:38.508135 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl_7fa53334-e1e4-4682-931a-889de208185b/util/0.log" Dec 09 18:28:38 crc kubenswrapper[4853]: I1209 18:28:38.621403 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl_7fa53334-e1e4-4682-931a-889de208185b/pull/0.log" Dec 09 18:28:38 crc kubenswrapper[4853]: I1209 18:28:38.623624 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl_7fa53334-e1e4-4682-931a-889de208185b/pull/0.log" Dec 09 18:28:38 crc kubenswrapper[4853]: I1209 18:28:38.652949 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl_7fa53334-e1e4-4682-931a-889de208185b/util/0.log" Dec 09 18:28:38 crc kubenswrapper[4853]: I1209 18:28:38.840169 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl_7fa53334-e1e4-4682-931a-889de208185b/pull/0.log" Dec 09 18:28:38 crc kubenswrapper[4853]: I1209 18:28:38.845303 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl_7fa53334-e1e4-4682-931a-889de208185b/extract/0.log" Dec 09 18:28:38 crc kubenswrapper[4853]: I1209 18:28:38.846433 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_84f5b8ce9b737d136e9d44026726099c3c124bf2a1b3be498eb888ce47gwmrl_7fa53334-e1e4-4682-931a-889de208185b/util/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.059988 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-bj2bf_29813971-d50f-4186-88c8-380d54284514/kube-rbac-proxy/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.105853 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-xzhzt_967443dd-77e8-4090-a90e-c7e5f2152acb/kube-rbac-proxy/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.131616 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-bj2bf_29813971-d50f-4186-88c8-380d54284514/manager/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.270813 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-xzhzt_967443dd-77e8-4090-a90e-c7e5f2152acb/manager/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.313984 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-cxf5s_565b5b04-34ef-414f-8316-0b6ea0f7835e/kube-rbac-proxy/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.329242 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-cxf5s_565b5b04-34ef-414f-8316-0b6ea0f7835e/manager/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.481726 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-6j847_fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf/kube-rbac-proxy/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.616098 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-6j847_fcd1cd96-4a49-4b3d-94f1-df1bae0cf3bf/manager/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.672697 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-f5fm2_716e46e2-2382-4568-956b-eb55e54cbc92/kube-rbac-proxy/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.806153 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-f5fm2_716e46e2-2382-4568-956b-eb55e54cbc92/manager/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.815091 
4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-qvpfh_27577524-15bd-403c-9a4f-a693e212b9d3/kube-rbac-proxy/0.log" Dec 09 18:28:39 crc kubenswrapper[4853]: I1209 18:28:39.888156 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-qvpfh_27577524-15bd-403c-9a4f-a693e212b9d3/manager/0.log" Dec 09 18:28:40 crc kubenswrapper[4853]: I1209 18:28:40.026511 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lf994_a4ed8e4a-54de-45d2-962c-7fdbfd49b302/kube-rbac-proxy/0.log" Dec 09 18:28:40 crc kubenswrapper[4853]: I1209 18:28:40.322398 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-lf994_a4ed8e4a-54de-45d2-962c-7fdbfd49b302/manager/0.log" Dec 09 18:28:40 crc kubenswrapper[4853]: E1209 18:28:40.570321 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:28:40 crc kubenswrapper[4853]: I1209 18:28:40.888957 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-h4nnl_3861e360-3725-49dd-9201-9efb6bcaf978/kube-rbac-proxy/0.log" Dec 09 18:28:40 crc kubenswrapper[4853]: I1209 18:28:40.898686 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-vhm5h_4f498f4a-152d-4c28-85b6-71fdeb32d148/kube-rbac-proxy/0.log" Dec 09 18:28:40 crc kubenswrapper[4853]: I1209 18:28:40.906794 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-h4nnl_3861e360-3725-49dd-9201-9efb6bcaf978/manager/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.091796 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-qtnbj_808c6195-03f9-4f53-8cc0-8a70dc0d9588/kube-rbac-proxy/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.141042 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-vhm5h_4f498f4a-152d-4c28-85b6-71fdeb32d148/manager/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.167888 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-qtnbj_808c6195-03f9-4f53-8cc0-8a70dc0d9588/manager/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.306043 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-6c956_da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a/kube-rbac-proxy/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.366361 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-6c956_da740ac1-7ee6-4b46-8b9b-a5fd21df7c4a/manager/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.542188 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jq42w_317f5d16-66f6-42fb-b6b7-01ad51915f20/kube-rbac-proxy/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.552679 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jq42w_317f5d16-66f6-42fb-b6b7-01ad51915f20/manager/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.627209 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-kgtxf_8c543f97-25ca-48a7-8b42-120884dee80b/kube-rbac-proxy/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.790356 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-77blt_7f4def87-3330-40cc-863c-f6bfe07e9c2d/kube-rbac-proxy/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.805638 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-kgtxf_8c543f97-25ca-48a7-8b42-120884dee80b/manager/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.850573 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-77blt_7f4def87-3330-40cc-863c-f6bfe07e9c2d/manager/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.985249 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fcmbqh_d2aeb9ff-da65-4fc1-8362-29c263f9f4c3/kube-rbac-proxy/0.log" Dec 09 18:28:41 crc kubenswrapper[4853]: I1209 18:28:41.995567 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fcmbqh_d2aeb9ff-da65-4fc1-8362-29c263f9f4c3/manager/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.216821 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pvfz9_39bb0b5d-d4ed-40d2-9c00-226b17a9ef09/registry-server/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.431547 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-cgkl7_e9f17b0a-4f03-460b-b0e9-743882aa435e/kube-rbac-proxy/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.458392 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5f5557f974-498cv_dec17fb3-8dd0-4c56-b058-9d0ab2fae769/operator/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.567980 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-cgkl7_e9f17b0a-4f03-460b-b0e9-743882aa435e/manager/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.672846 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nfkqh_0a7fce2f-b4b3-4f6c-b417-aa159e161722/kube-rbac-proxy/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.744711 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nfkqh_0a7fce2f-b4b3-4f6c-b417-aa159e161722/manager/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.747459 4853 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-866b78c4d6-vrq2f_f395880e-faf4-4550-aac9-9cef954c967a/manager/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.921891 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-sv987_6869667f-ac77-482d-b8c1-7ee9d7525c59/operator/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.943088 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-9bq2r_eff79899-1ea3-418a-86fc-f988303b6da5/kube-rbac-proxy/0.log" Dec 09 18:28:43 crc kubenswrapper[4853]: I1209 18:28:43.972638 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-9bq2r_eff79899-1ea3-418a-86fc-f988303b6da5/manager/0.log" Dec 09 18:28:44 crc kubenswrapper[4853]: I1209 18:28:44.102020 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-796785f986-g89lr_234b75e2-2793-4ec3-ab45-c3603ae69436/kube-rbac-proxy/0.log" Dec 09 18:28:44 crc kubenswrapper[4853]: I1209 18:28:44.215888 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-s7rrn_85276cf5-13f2-4890-9d08-07f5e01dc90c/kube-rbac-proxy/0.log" Dec 09 18:28:44 crc kubenswrapper[4853]: I1209 18:28:44.243689 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-s7rrn_85276cf5-13f2-4890-9d08-07f5e01dc90c/manager/0.log" Dec 09 18:28:44 crc kubenswrapper[4853]: I1209 18:28:44.339490 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-qxpbb_8796eb82-e5f1-4ee0-90de-ee42e6010e0d/kube-rbac-proxy/0.log" Dec 09 18:28:44 crc kubenswrapper[4853]: I1209 18:28:44.412688 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-qxpbb_8796eb82-e5f1-4ee0-90de-ee42e6010e0d/manager/0.log" Dec 09 18:28:44 crc kubenswrapper[4853]: I1209 18:28:44.521672 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-796785f986-g89lr_234b75e2-2793-4ec3-ab45-c3603ae69436/manager/0.log" Dec 09 18:28:49 crc kubenswrapper[4853]: E1209 18:28:49.571891 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:28:53 crc kubenswrapper[4853]: E1209 18:28:53.584076 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:28:58 crc kubenswrapper[4853]: I1209 18:28:58.593381 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:28:58 crc kubenswrapper[4853]: I1209 18:28:58.594083 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:29:01 crc kubenswrapper[4853]: E1209 18:29:01.573290 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:29:04 crc kubenswrapper[4853]: E1209 18:29:04.569942 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:29:05 crc kubenswrapper[4853]: I1209 18:29:05.109063 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8qdc9_044a407a-76c8-49bc-8d24-040de15c1b88/control-plane-machine-set-operator/0.log" Dec 09 18:29:05 crc kubenswrapper[4853]: I1209 18:29:05.273176 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zgd7r_a93d1b37-5404-44ed-83de-51eb04c1b2c4/kube-rbac-proxy/0.log" Dec 09 18:29:05 crc kubenswrapper[4853]: I1209 18:29:05.390119 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zgd7r_a93d1b37-5404-44ed-83de-51eb04c1b2c4/machine-api-operator/0.log" Dec 09 18:29:14 crc kubenswrapper[4853]: E1209 18:29:14.570206 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:29:18 crc kubenswrapper[4853]: E1209 18:29:18.570319 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:29:19 crc kubenswrapper[4853]: I1209 18:29:19.401903 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-fdjh9_f37279b1-e5b2-4c60-a604-70931c3f028d/cert-manager-controller/0.log" Dec 09 18:29:19 crc kubenswrapper[4853]: I1209 18:29:19.513114 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-f4lf9_9548e933-9e30-4c00-9713-84238a3a557d/cert-manager-cainjector/0.log" Dec 09 18:29:19 crc kubenswrapper[4853]: I1209 18:29:19.603991 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-b8xr7_c7f7bef8-1add-4d1a-b159-3651264fc6de/cert-manager-webhook/0.log" Dec 09 18:29:26 crc kubenswrapper[4853]: E1209 18:29:26.572172 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:29:28 crc kubenswrapper[4853]: I1209 18:29:28.592525 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:29:28 crc kubenswrapper[4853]: I1209 18:29:28.592958 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:29:30 crc kubenswrapper[4853]: E1209 18:29:30.605673 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:29:34 crc kubenswrapper[4853]: I1209 18:29:34.587643 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-642fz_648c71b4-40ba-4038-8779-b4971316abda/nmstate-console-plugin/0.log" Dec 09 18:29:34 crc kubenswrapper[4853]: I1209 18:29:34.801504 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bb6s6_449b3cb4-bb3a-480c-8a62-6b36c111037e/nmstate-handler/0.log" Dec 09 18:29:34 crc kubenswrapper[4853]: I1209 18:29:34.846375 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-fkxp8_fb495110-ff30-42c3-89ef-ddcee729c1bf/kube-rbac-proxy/0.log" Dec 09 18:29:34 crc kubenswrapper[4853]: I1209 18:29:34.908745 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-fkxp8_fb495110-ff30-42c3-89ef-ddcee729c1bf/nmstate-metrics/0.log" Dec 09 18:29:35 crc kubenswrapper[4853]: I1209 18:29:35.037490 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-6br2m_b2e89fb2-6e5f-4074-898b-fe3cca63994d/nmstate-operator/0.log" Dec 09 18:29:35 crc kubenswrapper[4853]: I1209 18:29:35.127458 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-rphkb_535a4202-4d27-4167-9297-c2309fe99da9/nmstate-webhook/0.log" Dec 09 18:29:40 crc kubenswrapper[4853]: E1209 18:29:40.571328 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:29:42 crc 
kubenswrapper[4853]: E1209 18:29:42.570245 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:29:51 crc kubenswrapper[4853]: I1209 18:29:51.337189 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-67997bf5ff-6pn4l_39cc3872-b69b-4be6-8e95-bfa0fa931045/kube-rbac-proxy/0.log" Dec 09 18:29:51 crc kubenswrapper[4853]: I1209 18:29:51.676355 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-67997bf5ff-6pn4l_39cc3872-b69b-4be6-8e95-bfa0fa931045/manager/0.log" Dec 09 18:29:53 crc kubenswrapper[4853]: E1209 18:29:53.576565 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:29:54 crc kubenswrapper[4853]: E1209 18:29:54.569552 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:29:58 crc kubenswrapper[4853]: I1209 18:29:58.593291 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:29:58 crc kubenswrapper[4853]: I1209 18:29:58.593973 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:29:58 crc kubenswrapper[4853]: I1209 18:29:58.594030 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 18:29:58 crc kubenswrapper[4853]: I1209 18:29:58.595248 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"405135359b7635f5e67016d3c1cff75d7b75670dbb5c667e67e4591d6717b5c7"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 18:29:58 crc kubenswrapper[4853]: I1209 18:29:58.595328 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://405135359b7635f5e67016d3c1cff75d7b75670dbb5c667e67e4591d6717b5c7" gracePeriod=600 Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 
18:30:00.115469 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="405135359b7635f5e67016d3c1cff75d7b75670dbb5c667e67e4591d6717b5c7" exitCode=0 Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.116001 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"405135359b7635f5e67016d3c1cff75d7b75670dbb5c667e67e4591d6717b5c7"} Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.116027 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"} Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.116077 4853 scope.go:117] "RemoveContainer" containerID="66586ef4c3a7c8be66e0cf1548ba9a07f2c93a5f1421ddd829686d3f6f44e404" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.179573 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q"] Dec 09 18:30:00 crc kubenswrapper[4853]: E1209 18:30:00.180382 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerName="extract-content" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.180399 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerName="extract-content" Dec 09 18:30:00 crc kubenswrapper[4853]: E1209 18:30:00.180422 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerName="extract-utilities" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.180429 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerName="extract-utilities" Dec 09 18:30:00 crc kubenswrapper[4853]: E1209 18:30:00.180454 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerName="registry-server" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.180461 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerName="registry-server" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.180705 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="50310369-ae81-4efd-bf04-39deb9c9a3cc" containerName="registry-server" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.181715 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.184946 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.185190 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.210476 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q"] Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.233456 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a202cde7-4f19-4ead-91e7-9e6631706df0-secret-volume\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.233575 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a202cde7-4f19-4ead-91e7-9e6631706df0-config-volume\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.233669 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lgf\" (UniqueName: \"kubernetes.io/projected/a202cde7-4f19-4ead-91e7-9e6631706df0-kube-api-access-k4lgf\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.335963 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a202cde7-4f19-4ead-91e7-9e6631706df0-secret-volume\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.336083 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a202cde7-4f19-4ead-91e7-9e6631706df0-config-volume\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.336165 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lgf\" (UniqueName: \"kubernetes.io/projected/a202cde7-4f19-4ead-91e7-9e6631706df0-kube-api-access-k4lgf\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.337048 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a202cde7-4f19-4ead-91e7-9e6631706df0-config-volume\") pod 
\"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.342404 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a202cde7-4f19-4ead-91e7-9e6631706df0-secret-volume\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.358310 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lgf\" (UniqueName: \"kubernetes.io/projected/a202cde7-4f19-4ead-91e7-9e6631706df0-kube-api-access-k4lgf\") pod \"collect-profiles-29421750-vdr5q\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.504493 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:00 crc kubenswrapper[4853]: I1209 18:30:00.987871 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q"] Dec 09 18:30:02 crc kubenswrapper[4853]: I1209 18:30:02.139794 4853 generic.go:334] "Generic (PLEG): container finished" podID="a202cde7-4f19-4ead-91e7-9e6631706df0" containerID="f95147fa9e238662bd8531f0eb85fe53efda07be77a40ed2f248bd1d345da1d5" exitCode=0 Dec 09 18:30:02 crc kubenswrapper[4853]: I1209 18:30:02.139845 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" event={"ID":"a202cde7-4f19-4ead-91e7-9e6631706df0","Type":"ContainerDied","Data":"f95147fa9e238662bd8531f0eb85fe53efda07be77a40ed2f248bd1d345da1d5"} Dec 09 18:30:02 crc kubenswrapper[4853]: I1209 18:30:02.140247 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" event={"ID":"a202cde7-4f19-4ead-91e7-9e6631706df0","Type":"ContainerStarted","Data":"11eecd0f661d8616b42d8bc60dc5ef90fb62d39b58236ea0d36e6a20bab568c0"} Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.531564 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.617083 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4lgf\" (UniqueName: \"kubernetes.io/projected/a202cde7-4f19-4ead-91e7-9e6631706df0-kube-api-access-k4lgf\") pod \"a202cde7-4f19-4ead-91e7-9e6631706df0\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.617182 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a202cde7-4f19-4ead-91e7-9e6631706df0-secret-volume\") pod \"a202cde7-4f19-4ead-91e7-9e6631706df0\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.617223 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a202cde7-4f19-4ead-91e7-9e6631706df0-config-volume\") pod \"a202cde7-4f19-4ead-91e7-9e6631706df0\" (UID: \"a202cde7-4f19-4ead-91e7-9e6631706df0\") " Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.622760 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a202cde7-4f19-4ead-91e7-9e6631706df0-config-volume" (OuterVolumeSpecName: "config-volume") pod "a202cde7-4f19-4ead-91e7-9e6631706df0" (UID: "a202cde7-4f19-4ead-91e7-9e6631706df0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.625961 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a202cde7-4f19-4ead-91e7-9e6631706df0-kube-api-access-k4lgf" (OuterVolumeSpecName: "kube-api-access-k4lgf") pod "a202cde7-4f19-4ead-91e7-9e6631706df0" (UID: "a202cde7-4f19-4ead-91e7-9e6631706df0"). InnerVolumeSpecName "kube-api-access-k4lgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.626870 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a202cde7-4f19-4ead-91e7-9e6631706df0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a202cde7-4f19-4ead-91e7-9e6631706df0" (UID: "a202cde7-4f19-4ead-91e7-9e6631706df0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.720675 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4lgf\" (UniqueName: \"kubernetes.io/projected/a202cde7-4f19-4ead-91e7-9e6631706df0-kube-api-access-k4lgf\") on node \"crc\" DevicePath \"\"" Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.720942 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a202cde7-4f19-4ead-91e7-9e6631706df0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 18:30:03 crc kubenswrapper[4853]: I1209 18:30:03.721000 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a202cde7-4f19-4ead-91e7-9e6631706df0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 18:30:04 crc kubenswrapper[4853]: I1209 18:30:04.160464 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" event={"ID":"a202cde7-4f19-4ead-91e7-9e6631706df0","Type":"ContainerDied","Data":"11eecd0f661d8616b42d8bc60dc5ef90fb62d39b58236ea0d36e6a20bab568c0"} Dec 09 18:30:04 crc kubenswrapper[4853]: I1209 18:30:04.160775 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11eecd0f661d8616b42d8bc60dc5ef90fb62d39b58236ea0d36e6a20bab568c0" Dec 09 18:30:04 crc kubenswrapper[4853]: I1209 18:30:04.160825 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421750-vdr5q" Dec 09 18:30:04 crc kubenswrapper[4853]: I1209 18:30:04.617093 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86"] Dec 09 18:30:04 crc kubenswrapper[4853]: I1209 18:30:04.628240 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421705-dlv86"] Dec 09 18:30:05 crc kubenswrapper[4853]: E1209 18:30:05.569558 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:30:05 crc kubenswrapper[4853]: I1209 18:30:05.619188 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b94aea8-6525-4e97-b3bc-66eea871224b" path="/var/lib/kubelet/pods/3b94aea8-6525-4e97-b3bc-66eea871224b/volumes" Dec 09 18:30:08 crc kubenswrapper[4853]: E1209 18:30:08.570337 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:30:08 crc kubenswrapper[4853]: I1209 18:30:08.611944 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-lbcsr_8bd4f09f-79cb-4e4f-b2c0-aeb79f65cb66/cluster-logging-operator/0.log" Dec 09 18:30:08 crc kubenswrapper[4853]: I1209 18:30:08.781228 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-compactor-0_406b84c5-9e7a-4caa-8d3d-561834086d10/loki-compactor/0.log" Dec 09 18:30:08 crc kubenswrapper[4853]: I1209 18:30:08.828533 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-pm7gl_f3161e6e-aba7-422a-a2a5-f6384a378672/collector/0.log" Dec 09 18:30:08 crc kubenswrapper[4853]: I1209 18:30:08.961811 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-p22sf_95c39f8d-6f5d-4c8e-8505-9cff1c6da497/loki-distributor/0.log" Dec 09 18:30:09 crc kubenswrapper[4853]: I1209 18:30:09.029163 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f5f8575b6-7sclr_eb9aef1a-068b-494d-ba15-f49b97fed99c/gateway/0.log" Dec 09 18:30:09 crc kubenswrapper[4853]: I1209 18:30:09.090854 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f5f8575b6-7sclr_eb9aef1a-068b-494d-ba15-f49b97fed99c/opa/0.log" Dec 09 18:30:09 crc kubenswrapper[4853]: I1209 18:30:09.236796 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f5f8575b6-qw6dq_fd90b911-3db7-49be-8c84-42d05d55e4d3/gateway/0.log" Dec 09 18:30:09 crc kubenswrapper[4853]: I1209 18:30:09.245788 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f5f8575b6-qw6dq_fd90b911-3db7-49be-8c84-42d05d55e4d3/opa/0.log" Dec 09 18:30:09 crc kubenswrapper[4853]: I1209 18:30:09.454551 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_86ebc563-5cec-468c-97fd-b1a7b5e1f4a2/loki-index-gateway/0.log" Dec 09 18:30:09 crc kubenswrapper[4853]: I1209 18:30:09.565664 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_e056741b-93d1-44c3-a16d-b6a04a3e1979/loki-ingester/0.log" Dec 09 18:30:09 crc kubenswrapper[4853]: I1209 18:30:09.631993 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-td5fh_b8a31444-3d60-49f9-b39e-dd8b79cc4195/loki-querier/0.log" Dec 09 18:30:10 crc kubenswrapper[4853]: I1209 18:30:10.362112 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-mv68d_4990ddc1-fd57-44bd-a4e9-a3b63f5f3920/loki-query-frontend/0.log" Dec 09 18:30:17 crc kubenswrapper[4853]: E1209 18:30:17.569356 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:30:20 crc kubenswrapper[4853]: E1209 18:30:20.575880 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:30:26 crc kubenswrapper[4853]: I1209 18:30:26.853584 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-vrl25_3830e853-f59e-47bc-8fce-ef3f9a6e3d24/kube-rbac-proxy/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: 
I1209 18:30:27.045320 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-vrl25_3830e853-f59e-47bc-8fce-ef3f9a6e3d24/controller/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.163847 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-frr-files/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.310414 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-frr-files/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.359910 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-reloader/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.360066 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-metrics/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.377460 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-reloader/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.604136 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-frr-files/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.612919 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-metrics/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.636731 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-metrics/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.649573 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-reloader/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.818416 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-frr-files/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.897519 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-metrics/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.907686 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/cp-reloader/0.log" Dec 09 18:30:27 crc kubenswrapper[4853]: I1209 18:30:27.981103 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/controller/0.log" Dec 09 18:30:28 crc kubenswrapper[4853]: I1209 18:30:28.148171 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/frr-metrics/0.log" Dec 09 18:30:28 crc kubenswrapper[4853]: I1209 18:30:28.185981 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/kube-rbac-proxy/0.log" Dec 09 18:30:28 crc kubenswrapper[4853]: I1209 18:30:28.252861 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/kube-rbac-proxy-frr/0.log" Dec 09 18:30:28 crc kubenswrapper[4853]: I1209 18:30:28.597072 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/reloader/0.log" Dec 09 18:30:28 crc kubenswrapper[4853]: I1209 18:30:28.646217 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-9b5hd_7963b96f-5ec5-4e4d-a505-286baf8fec0a/frr-k8s-webhook-server/0.log" Dec 09 18:30:28 crc kubenswrapper[4853]: I1209 18:30:28.923351 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76f597947c-hpbsl_992aeeb4-58d4-4276-b421-6bb66e4c419d/manager/0.log" Dec 09 18:30:29 crc kubenswrapper[4853]: I1209 18:30:29.027463 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75b46f54d-7z4tq_2974b16e-c9c7-41c5-a8c0-35c5f44cf2f6/webhook-server/0.log" Dec 09 18:30:29 crc kubenswrapper[4853]: I1209 18:30:29.233313 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9f624_d69e1c5d-c55e-4df0-9a28-1b5f9de9136c/kube-rbac-proxy/0.log" Dec 09 18:30:29 crc kubenswrapper[4853]: I1209 18:30:29.617564 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pn5xm_4cfdbc7f-7f33-4e37-97ed-d568fe27219c/frr/0.log" Dec 09 18:30:29 crc kubenswrapper[4853]: I1209 18:30:29.839643 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9f624_d69e1c5d-c55e-4df0-9a28-1b5f9de9136c/speaker/0.log" Dec 09 18:30:30 crc kubenswrapper[4853]: E1209 18:30:30.569985 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:30:31 crc kubenswrapper[4853]: E1209 18:30:31.569529 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:30:31 crc kubenswrapper[4853]: I1209 18:30:31.871871 4853 scope.go:117] "RemoveContainer" containerID="e2fe9913441240ba53f26e8711d349963e1f3ccaceadbc922402220d00c203c4" Dec 09 18:30:44 crc kubenswrapper[4853]: E1209 18:30:44.570462 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:30:46 crc kubenswrapper[4853]: E1209 18:30:46.569312 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:30:46 crc kubenswrapper[4853]: I1209 18:30:46.963579 4853 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv_512ebe87-626f-4880-b7d5-20d61f740a8b/util/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.184787 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv_512ebe87-626f-4880-b7d5-20d61f740a8b/util/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.186618 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv_512ebe87-626f-4880-b7d5-20d61f740a8b/pull/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.205584 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv_512ebe87-626f-4880-b7d5-20d61f740a8b/pull/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.368373 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv_512ebe87-626f-4880-b7d5-20d61f740a8b/pull/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.372155 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv_512ebe87-626f-4880-b7d5-20d61f740a8b/extract/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.387589 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8m5vmv_512ebe87-626f-4880-b7d5-20d61f740a8b/util/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.565044 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv_7a72a4c1-6c6e-4022-ab9b-186a8814affc/util/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.727098 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv_7a72a4c1-6c6e-4022-ab9b-186a8814affc/util/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.767914 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv_7a72a4c1-6c6e-4022-ab9b-186a8814affc/pull/0.log" Dec 09 18:30:47 crc kubenswrapper[4853]: I1209 18:30:47.974944 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv_7a72a4c1-6c6e-4022-ab9b-186a8814affc/pull/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.070091 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv_7a72a4c1-6c6e-4022-ab9b-186a8814affc/util/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.123527 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv_7a72a4c1-6c6e-4022-ab9b-186a8814affc/pull/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.167360 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212frwqsv_7a72a4c1-6c6e-4022-ab9b-186a8814affc/extract/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.326563 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z_174b41a8-0a58-43f6-b6cc-03f8864597e5/util/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.462094 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z_174b41a8-0a58-43f6-b6cc-03f8864597e5/util/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.505913 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z_174b41a8-0a58-43f6-b6cc-03f8864597e5/pull/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.517025 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z_174b41a8-0a58-43f6-b6cc-03f8864597e5/pull/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.729278 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z_174b41a8-0a58-43f6-b6cc-03f8864597e5/extract/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.730003 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z_174b41a8-0a58-43f6-b6cc-03f8864597e5/pull/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.772298 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210f9r8z_174b41a8-0a58-43f6-b6cc-03f8864597e5/util/0.log" Dec 09 18:30:48 crc kubenswrapper[4853]: I1209 18:30:48.902402 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn_f04547bf-9d5c-4408-a107-bbe92020eb73/util/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.095453 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn_f04547bf-9d5c-4408-a107-bbe92020eb73/util/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.145721 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn_f04547bf-9d5c-4408-a107-bbe92020eb73/pull/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.148029 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn_f04547bf-9d5c-4408-a107-bbe92020eb73/pull/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.365767 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn_f04547bf-9d5c-4408-a107-bbe92020eb73/extract/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.398095 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn_f04547bf-9d5c-4408-a107-bbe92020eb73/util/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.404516 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fmqkrn_f04547bf-9d5c-4408-a107-bbe92020eb73/pull/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.684945 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s_3d92b1b6-cc38-4ead-893a-69afbb6e6786/util/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.911183 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s_3d92b1b6-cc38-4ead-893a-69afbb6e6786/util/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.922376 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s_3d92b1b6-cc38-4ead-893a-69afbb6e6786/pull/0.log" Dec 09 18:30:49 crc kubenswrapper[4853]: I1209 18:30:49.941306 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s_3d92b1b6-cc38-4ead-893a-69afbb6e6786/pull/0.log" Dec 09 18:30:50 crc kubenswrapper[4853]: I1209 18:30:50.209078 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s_3d92b1b6-cc38-4ead-893a-69afbb6e6786/util/0.log" Dec 09 18:30:50 crc kubenswrapper[4853]: I1209 18:30:50.216822 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s_3d92b1b6-cc38-4ead-893a-69afbb6e6786/pull/0.log" Dec 09 18:30:50 crc kubenswrapper[4853]: I1209 18:30:50.224179 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83xqf9s_3d92b1b6-cc38-4ead-893a-69afbb6e6786/extract/0.log" Dec 09 18:30:50 crc kubenswrapper[4853]: I1209 18:30:50.415377 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2nsl_83a278e1-13a8-4205-b2b1-74f168c7f2ac/extract-utilities/0.log" Dec 09 18:30:50 crc kubenswrapper[4853]: I1209 18:30:50.609084 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2nsl_83a278e1-13a8-4205-b2b1-74f168c7f2ac/extract-content/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.356079 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2nsl_83a278e1-13a8-4205-b2b1-74f168c7f2ac/extract-utilities/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.357062 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2nsl_83a278e1-13a8-4205-b2b1-74f168c7f2ac/extract-content/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.357451 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2nsl_83a278e1-13a8-4205-b2b1-74f168c7f2ac/extract-content/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.390401 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-s2nsl_83a278e1-13a8-4205-b2b1-74f168c7f2ac/extract-utilities/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.617724 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhrsr_c26d927d-cf9a-4030-8b51-5a02cc9688ad/extract-utilities/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.679692 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s2nsl_83a278e1-13a8-4205-b2b1-74f168c7f2ac/registry-server/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.765913 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhrsr_c26d927d-cf9a-4030-8b51-5a02cc9688ad/extract-utilities/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.765926 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhrsr_c26d927d-cf9a-4030-8b51-5a02cc9688ad/extract-content/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.821533 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhrsr_c26d927d-cf9a-4030-8b51-5a02cc9688ad/extract-content/0.log" Dec 09 18:30:51 crc kubenswrapper[4853]: I1209 18:30:51.982991 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhrsr_c26d927d-cf9a-4030-8b51-5a02cc9688ad/extract-utilities/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.006255 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhrsr_c26d927d-cf9a-4030-8b51-5a02cc9688ad/extract-content/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.124650 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5kqd6_98a172d2-8ea2-44a0-959d-0b343cceeaec/marketplace-operator/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.307474 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zhrsr_c26d927d-cf9a-4030-8b51-5a02cc9688ad/registry-server/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.563226 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jz7h6_387d3b82-b9bc-4adc-a6f6-c4a9bee3b527/extract-utilities/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.642909 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jz7h6_387d3b82-b9bc-4adc-a6f6-c4a9bee3b527/extract-utilities/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.679774 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jz7h6_387d3b82-b9bc-4adc-a6f6-c4a9bee3b527/extract-content/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.695302 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jz7h6_387d3b82-b9bc-4adc-a6f6-c4a9bee3b527/extract-content/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.907104 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jz7h6_387d3b82-b9bc-4adc-a6f6-c4a9bee3b527/extract-content/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.928548 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-jz7h6_387d3b82-b9bc-4adc-a6f6-c4a9bee3b527/extract-utilities/0.log" Dec 09 18:30:52 crc kubenswrapper[4853]: I1209 18:30:52.988808 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t4lrq_bd9a6790-1308-4763-82a8-73c1a4ba6997/extract-utilities/0.log" Dec 09 18:30:53 crc kubenswrapper[4853]: I1209 18:30:53.095860 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jz7h6_387d3b82-b9bc-4adc-a6f6-c4a9bee3b527/registry-server/0.log" Dec 09 18:30:53 crc kubenswrapper[4853]: I1209 18:30:53.167366 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t4lrq_bd9a6790-1308-4763-82a8-73c1a4ba6997/extract-content/0.log" Dec 09 18:30:53 crc kubenswrapper[4853]: I1209 18:30:53.169874 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t4lrq_bd9a6790-1308-4763-82a8-73c1a4ba6997/extract-utilities/0.log" Dec 09 18:30:53 crc kubenswrapper[4853]: I1209 18:30:53.211651 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t4lrq_bd9a6790-1308-4763-82a8-73c1a4ba6997/extract-content/0.log" Dec 09 18:30:53 crc kubenswrapper[4853]: I1209 18:30:53.392086 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t4lrq_bd9a6790-1308-4763-82a8-73c1a4ba6997/extract-content/0.log" Dec 09 18:30:53 crc kubenswrapper[4853]: I1209 18:30:53.435999 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t4lrq_bd9a6790-1308-4763-82a8-73c1a4ba6997/extract-utilities/0.log" Dec 09 18:30:54 crc kubenswrapper[4853]: I1209 18:30:54.259238 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t4lrq_bd9a6790-1308-4763-82a8-73c1a4ba6997/registry-server/0.log" Dec 09 18:30:57 crc kubenswrapper[4853]: E1209 18:30:57.569891 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:30:58 crc kubenswrapper[4853]: E1209 18:30:58.586294 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.704949 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-npxqn"] Dec 09 18:31:01 crc kubenswrapper[4853]: E1209 18:31:01.706816 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a202cde7-4f19-4ead-91e7-9e6631706df0" containerName="collect-profiles" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.706844 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a202cde7-4f19-4ead-91e7-9e6631706df0" containerName="collect-profiles" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.707198 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a202cde7-4f19-4ead-91e7-9e6631706df0" 
containerName="collect-profiles" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.709118 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.743040 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npxqn"] Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.829590 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-catalog-content\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.829705 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlhzw\" (UniqueName: \"kubernetes.io/projected/31e651e1-0cb6-4a7a-a644-31ce8484f957-kube-api-access-dlhzw\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.829886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-utilities\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.931874 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlhzw\" (UniqueName: \"kubernetes.io/projected/31e651e1-0cb6-4a7a-a644-31ce8484f957-kube-api-access-dlhzw\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.932046 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-utilities\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.932142 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-catalog-content\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.932710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-utilities\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.932718 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-catalog-content\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " 
pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:01 crc kubenswrapper[4853]: I1209 18:31:01.959671 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlhzw\" (UniqueName: \"kubernetes.io/projected/31e651e1-0cb6-4a7a-a644-31ce8484f957-kube-api-access-dlhzw\") pod \"community-operators-npxqn\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:02 crc kubenswrapper[4853]: I1209 18:31:02.034383 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:02 crc kubenswrapper[4853]: I1209 18:31:02.663421 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npxqn"] Dec 09 18:31:02 crc kubenswrapper[4853]: I1209 18:31:02.839476 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxqn" event={"ID":"31e651e1-0cb6-4a7a-a644-31ce8484f957","Type":"ContainerStarted","Data":"f46e818178a2f3d933d77474e75e8d41cc5a34fe76cccff9b79be006c8f3052c"} Dec 09 18:31:03 crc kubenswrapper[4853]: I1209 18:31:03.855932 4853 generic.go:334] "Generic (PLEG): container finished" podID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerID="acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41" exitCode=0 Dec 09 18:31:03 crc kubenswrapper[4853]: I1209 18:31:03.856137 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxqn" event={"ID":"31e651e1-0cb6-4a7a-a644-31ce8484f957","Type":"ContainerDied","Data":"acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41"} Dec 09 18:31:05 crc kubenswrapper[4853]: I1209 18:31:05.886759 4853 generic.go:334] "Generic (PLEG): container finished" podID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerID="cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b" exitCode=0 Dec 09 18:31:05 crc kubenswrapper[4853]: I1209 18:31:05.886879 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxqn" event={"ID":"31e651e1-0cb6-4a7a-a644-31ce8484f957","Type":"ContainerDied","Data":"cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b"} Dec 09 18:31:06 crc kubenswrapper[4853]: I1209 18:31:06.901204 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxqn" event={"ID":"31e651e1-0cb6-4a7a-a644-31ce8484f957","Type":"ContainerStarted","Data":"822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78"} Dec 09 18:31:06 crc kubenswrapper[4853]: I1209 18:31:06.941222 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-npxqn" podStartSLOduration=3.184041212 podStartE2EDuration="5.941201768s" podCreationTimestamp="2025-12-09 18:31:01 +0000 UTC" firstStartedPulling="2025-12-09 18:31:03.858575816 +0000 UTC m=+5690.793314988" lastFinishedPulling="2025-12-09 18:31:06.615736372 +0000 UTC m=+5693.550475544" observedRunningTime="2025-12-09 18:31:06.924148147 +0000 UTC m=+5693.858887329" watchObservedRunningTime="2025-12-09 18:31:06.941201768 +0000 UTC m=+5693.875940960" Dec 09 18:31:09 crc kubenswrapper[4853]: E1209 18:31:09.568990 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:31:10 crc kubenswrapper[4853]: I1209 18:31:10.551125 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f57b99958-7x5lw_d97a2cbf-13a7-4985-a22b-aa4cd04d192c/prometheus-operator-admission-webhook/0.log" Dec 09 18:31:10 crc kubenswrapper[4853]: I1209 18:31:10.736180 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-pkncp_04755536-2551-4375-9e7c-1b901b498f8b/prometheus-operator/0.log" Dec 09 18:31:10 crc kubenswrapper[4853]: I1209 18:31:10.737877 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f57b99958-f7mcn_eb455954-ec70-4aa6-bbf1-39354677512b/prometheus-operator-admission-webhook/0.log" Dec 09 18:31:11 crc kubenswrapper[4853]: I1209 18:31:11.140443 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-q4nfc_1b0f2e0b-84ec-4b60-afa2-4f090a35596d/operator/0.log" Dec 09 18:31:11 crc kubenswrapper[4853]: I1209 18:31:11.186569 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-nhns2_d29e5a87-e074-4460-8fcf-d8b519b2c746/observability-ui-dashboards/0.log" Dec 09 18:31:11 crc kubenswrapper[4853]: I1209 18:31:11.377797 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-wclnd_37fdfb11-d235-458b-8963-bf7cb3a9b589/perses-operator/0.log" Dec 09 18:31:11 crc kubenswrapper[4853]: E1209 18:31:11.570850 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:31:12 crc kubenswrapper[4853]: I1209 18:31:12.034650 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:12 crc kubenswrapper[4853]: I1209 18:31:12.034716 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:12 crc kubenswrapper[4853]: I1209 18:31:12.266955 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:13 crc kubenswrapper[4853]: I1209 18:31:13.030484 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:13 crc kubenswrapper[4853]: I1209 18:31:13.089634 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npxqn"] Dec 09 18:31:14 crc kubenswrapper[4853]: I1209 18:31:14.988449 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-npxqn" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerName="registry-server" containerID="cri-o://822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78" gracePeriod=2 Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.554982 4853 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.574363 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlhzw\" (UniqueName: \"kubernetes.io/projected/31e651e1-0cb6-4a7a-a644-31ce8484f957-kube-api-access-dlhzw\") pod \"31e651e1-0cb6-4a7a-a644-31ce8484f957\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.574434 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-utilities\") pod \"31e651e1-0cb6-4a7a-a644-31ce8484f957\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.574463 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-catalog-content\") pod \"31e651e1-0cb6-4a7a-a644-31ce8484f957\" (UID: \"31e651e1-0cb6-4a7a-a644-31ce8484f957\") " Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.576964 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-utilities" (OuterVolumeSpecName: "utilities") pod "31e651e1-0cb6-4a7a-a644-31ce8484f957" (UID: "31e651e1-0cb6-4a7a-a644-31ce8484f957"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.583209 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e651e1-0cb6-4a7a-a644-31ce8484f957-kube-api-access-dlhzw" (OuterVolumeSpecName: "kube-api-access-dlhzw") pod "31e651e1-0cb6-4a7a-a644-31ce8484f957" (UID: "31e651e1-0cb6-4a7a-a644-31ce8484f957"). InnerVolumeSpecName "kube-api-access-dlhzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.621813 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31e651e1-0cb6-4a7a-a644-31ce8484f957" (UID: "31e651e1-0cb6-4a7a-a644-31ce8484f957"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.677322 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlhzw\" (UniqueName: \"kubernetes.io/projected/31e651e1-0cb6-4a7a-a644-31ce8484f957-kube-api-access-dlhzw\") on node \"crc\" DevicePath \"\"" Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.677349 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:31:15 crc kubenswrapper[4853]: I1209 18:31:15.677358 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e651e1-0cb6-4a7a-a644-31ce8484f957-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.004777 4853 generic.go:334] "Generic (PLEG): container finished" podID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerID="822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78" exitCode=0 Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.004819 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxqn" event={"ID":"31e651e1-0cb6-4a7a-a644-31ce8484f957","Type":"ContainerDied","Data":"822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78"} Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.004846 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxqn" event={"ID":"31e651e1-0cb6-4a7a-a644-31ce8484f957","Type":"ContainerDied","Data":"f46e818178a2f3d933d77474e75e8d41cc5a34fe76cccff9b79be006c8f3052c"} Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.004863 4853 scope.go:117] "RemoveContainer" containerID="822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.007273 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-npxqn" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.039587 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npxqn"] Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.047893 4853 scope.go:117] "RemoveContainer" containerID="cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.050513 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-npxqn"] Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.071486 4853 scope.go:117] "RemoveContainer" containerID="acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.128837 4853 scope.go:117] "RemoveContainer" containerID="822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78" Dec 09 18:31:16 crc kubenswrapper[4853]: E1209 18:31:16.129590 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78\": container with ID starting with 822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78 not found: ID does not exist" containerID="822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.129642 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78"} err="failed to get container status \"822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78\": rpc error: code = NotFound desc = could not find container \"822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78\": container with ID starting with 822bb48e0ac523166a7eb0f90612f718eff6cea31e1caf78bbf47b60442a3e78 not found: ID does not exist" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.129664 4853 scope.go:117] "RemoveContainer" containerID="cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b" Dec 09 18:31:16 crc kubenswrapper[4853]: E1209 18:31:16.129924 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b\": container with ID starting with cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b not found: ID does not exist" containerID="cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.129962 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b"} err="failed to get container status \"cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b\": rpc error: code = NotFound desc = could not find container \"cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b\": container with ID starting with cbbac1f68b003d6a3070a310867a99a55cf51053ece272628c297317cd2e058b not found: ID does not exist" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.129991 4853 scope.go:117] "RemoveContainer" containerID="acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41" Dec 09 18:31:16 crc kubenswrapper[4853]: E1209 18:31:16.130284 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41\": container with ID starting with acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41 not found: ID does not exist" containerID="acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41" Dec 09 18:31:16 crc kubenswrapper[4853]: I1209 18:31:16.130330 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41"} err="failed to get container status \"acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41\": rpc error: code = NotFound desc = could not find container \"acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41\": container with ID starting with acff5f7bd25549d1c91e9becac4ca0d7335c5002c07173825e2db922d4ce1e41 not found: ID does not exist" Dec 09 18:31:17 crc kubenswrapper[4853]: I1209 18:31:17.581213 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" path="/var/lib/kubelet/pods/31e651e1-0cb6-4a7a-a644-31ce8484f957/volumes" Dec 09 18:31:24 crc kubenswrapper[4853]: E1209 18:31:24.569299 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:31:24 crc kubenswrapper[4853]: E1209 18:31:24.569348 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:31:26 crc kubenswrapper[4853]: I1209 18:31:26.965024 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-67997bf5ff-6pn4l_39cc3872-b69b-4be6-8e95-bfa0fa931045/kube-rbac-proxy/0.log" Dec 09 18:31:26 crc kubenswrapper[4853]: I1209 18:31:26.979135 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-67997bf5ff-6pn4l_39cc3872-b69b-4be6-8e95-bfa0fa931045/manager/0.log" Dec 09 18:31:38 crc kubenswrapper[4853]: E1209 18:31:38.571557 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:31:39 crc kubenswrapper[4853]: E1209 18:31:39.573695 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:31:48 crc kubenswrapper[4853]: E1209 18:31:48.247872 4853 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:49588->38.102.83.36:39411: write tcp 
38.102.83.36:49588->38.102.83.36:39411: write: broken pipe Dec 09 18:31:51 crc kubenswrapper[4853]: E1209 18:31:51.569705 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:31:53 crc kubenswrapper[4853]: E1209 18:31:53.578670 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:32:03 crc kubenswrapper[4853]: E1209 18:32:03.584134 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:32:04 crc kubenswrapper[4853]: I1209 18:32:04.570714 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:32:04 crc kubenswrapper[4853]: E1209 18:32:04.699407 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:32:04 crc kubenswrapper[4853]: E1209 18:32:04.699493 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:32:04 crc kubenswrapper[4853]: E1209 18:32:04.699698 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:32:04 crc kubenswrapper[4853]: E1209 18:32:04.700965 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:32:17 crc kubenswrapper[4853]: E1209 18:32:17.717185 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:32:17 crc kubenswrapper[4853]: E1209 18:32:17.717694 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:32:17 crc kubenswrapper[4853]: E1209 18:32:17.717819 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:32:17 crc kubenswrapper[4853]: E1209 18:32:17.718997 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:32:18 crc kubenswrapper[4853]: E1209 18:32:18.568330 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:32:28 crc kubenswrapper[4853]: I1209 18:32:28.593456 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:32:28 crc kubenswrapper[4853]: I1209 18:32:28.594001 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:32:29 crc kubenswrapper[4853]: E1209 18:32:29.569404 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:32:32 crc kubenswrapper[4853]: E1209 18:32:32.570174 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:32:43 crc kubenswrapper[4853]: E1209 18:32:43.584617 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:32:44 crc kubenswrapper[4853]: E1209 18:32:44.571062 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:32:56 crc kubenswrapper[4853]: E1209 18:32:56.570109 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:32:57 crc kubenswrapper[4853]: E1209 18:32:57.569388 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:32:58 crc kubenswrapper[4853]: I1209 18:32:58.592961 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:32:58 crc kubenswrapper[4853]: I1209 18:32:58.593423 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.642700 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t65zx"] Dec 09 18:32:59 crc kubenswrapper[4853]: E1209 18:32:59.676292 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerName="extract-content" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.676325 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerName="extract-content" Dec 09 18:32:59 crc kubenswrapper[4853]: E1209 18:32:59.676339 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerName="registry-server" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.676348 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" 
containerName="registry-server" Dec 09 18:32:59 crc kubenswrapper[4853]: E1209 18:32:59.676373 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerName="extract-utilities" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.676384 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerName="extract-utilities" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.676926 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e651e1-0cb6-4a7a-a644-31ce8484f957" containerName="registry-server" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.682792 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.707794 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t65zx"] Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.885050 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-utilities\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.885123 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-catalog-content\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.885308 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvkf\" (UniqueName: \"kubernetes.io/projected/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-kube-api-access-9nvkf\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.987943 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvkf\" (UniqueName: \"kubernetes.io/projected/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-kube-api-access-9nvkf\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.988163 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-utilities\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.988197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-catalog-content\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.988651 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-utilities\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:32:59 crc kubenswrapper[4853]: I1209 18:32:59.988719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-catalog-content\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:00 crc kubenswrapper[4853]: I1209 18:33:00.013479 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvkf\" (UniqueName: \"kubernetes.io/projected/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-kube-api-access-9nvkf\") pod \"certified-operators-t65zx\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:00 crc kubenswrapper[4853]: I1209 18:33:00.025865 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:00 crc kubenswrapper[4853]: I1209 18:33:00.576754 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t65zx"] Dec 09 18:33:01 crc kubenswrapper[4853]: I1209 18:33:01.359848 4853 generic.go:334] "Generic (PLEG): container finished" podID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerID="1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c" exitCode=0 Dec 09 18:33:01 crc kubenswrapper[4853]: I1209 18:33:01.359934 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t65zx" event={"ID":"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6","Type":"ContainerDied","Data":"1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c"} Dec 09 18:33:01 crc kubenswrapper[4853]: I1209 18:33:01.361667 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t65zx" event={"ID":"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6","Type":"ContainerStarted","Data":"219828dbeb2ad81b0c081123c43d10ffe3193f90a7a6849ac7c9cc8c500268eb"} Dec 09 18:33:02 crc kubenswrapper[4853]: I1209 18:33:02.378002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t65zx" event={"ID":"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6","Type":"ContainerStarted","Data":"38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c"} Dec 09 18:33:03 crc kubenswrapper[4853]: I1209 18:33:03.394081 4853 generic.go:334] "Generic (PLEG): container finished" podID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerID="38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c" exitCode=0 Dec 09 18:33:03 crc kubenswrapper[4853]: I1209 18:33:03.394385 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t65zx" event={"ID":"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6","Type":"ContainerDied","Data":"38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c"} Dec 09 18:33:03 crc kubenswrapper[4853]: I1209 18:33:03.832797 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cxhhx"] Dec 09 18:33:03 crc kubenswrapper[4853]: I1209 18:33:03.836257 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:03 crc kubenswrapper[4853]: I1209 18:33:03.847003 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxhhx"] Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.003570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-utilities\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.004006 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfwk\" (UniqueName: \"kubernetes.io/projected/fad98d15-2a08-4417-828e-c8529374012b-kube-api-access-tnfwk\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.004198 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-catalog-content\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.106705 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-catalog-content\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.106775 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-utilities\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.106918 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfwk\" (UniqueName: \"kubernetes.io/projected/fad98d15-2a08-4417-828e-c8529374012b-kube-api-access-tnfwk\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.107263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-catalog-content\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.107465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-utilities\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.146903 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tnfwk\" (UniqueName: \"kubernetes.io/projected/fad98d15-2a08-4417-828e-c8529374012b-kube-api-access-tnfwk\") pod \"redhat-marketplace-cxhhx\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.199238 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:04 crc kubenswrapper[4853]: I1209 18:33:04.951585 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxhhx"] Dec 09 18:33:04 crc kubenswrapper[4853]: W1209 18:33:04.952722 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfad98d15_2a08_4417_828e_c8529374012b.slice/crio-4756be376abdeed9564a8390d16a937a8b6ca062ac5ee53a092a2c85993bd629 WatchSource:0}: Error finding container 4756be376abdeed9564a8390d16a937a8b6ca062ac5ee53a092a2c85993bd629: Status 404 returned error can't find the container with id 4756be376abdeed9564a8390d16a937a8b6ca062ac5ee53a092a2c85993bd629 Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.416819 4853 generic.go:334] "Generic (PLEG): container finished" podID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerID="11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a" exitCode=0 Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.417043 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k9m2p/must-gather-qnmln" event={"ID":"23ab326c-f916-4b00-af22-bf5bdfdbc052","Type":"ContainerDied","Data":"11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a"} Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.417746 4853 scope.go:117] "RemoveContainer" containerID="11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a" Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.420497 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t65zx" event={"ID":"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6","Type":"ContainerStarted","Data":"89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6"} Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.426555 4853 generic.go:334] "Generic (PLEG): container finished" podID="fad98d15-2a08-4417-828e-c8529374012b" containerID="61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf" exitCode=0 Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.426624 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxhhx" event={"ID":"fad98d15-2a08-4417-828e-c8529374012b","Type":"ContainerDied","Data":"61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf"} Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.426714 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxhhx" event={"ID":"fad98d15-2a08-4417-828e-c8529374012b","Type":"ContainerStarted","Data":"4756be376abdeed9564a8390d16a937a8b6ca062ac5ee53a092a2c85993bd629"} Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.466887 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t65zx" podStartSLOduration=3.100015666 podStartE2EDuration="6.466859368s" podCreationTimestamp="2025-12-09 18:32:59 +0000 UTC" firstStartedPulling="2025-12-09 18:33:01.362194651 +0000 UTC m=+5808.296933833" 
lastFinishedPulling="2025-12-09 18:33:04.729038353 +0000 UTC m=+5811.663777535" observedRunningTime="2025-12-09 18:33:05.455258084 +0000 UTC m=+5812.389997306" watchObservedRunningTime="2025-12-09 18:33:05.466859368 +0000 UTC m=+5812.401598580" Dec 09 18:33:05 crc kubenswrapper[4853]: I1209 18:33:05.813498 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k9m2p_must-gather-qnmln_23ab326c-f916-4b00-af22-bf5bdfdbc052/gather/0.log" Dec 09 18:33:06 crc kubenswrapper[4853]: I1209 18:33:06.438684 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxhhx" event={"ID":"fad98d15-2a08-4417-828e-c8529374012b","Type":"ContainerStarted","Data":"5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390"} Dec 09 18:33:07 crc kubenswrapper[4853]: I1209 18:33:07.453999 4853 generic.go:334] "Generic (PLEG): container finished" podID="fad98d15-2a08-4417-828e-c8529374012b" containerID="5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390" exitCode=0 Dec 09 18:33:07 crc kubenswrapper[4853]: I1209 18:33:07.454156 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxhhx" event={"ID":"fad98d15-2a08-4417-828e-c8529374012b","Type":"ContainerDied","Data":"5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390"} Dec 09 18:33:08 crc kubenswrapper[4853]: I1209 18:33:08.472799 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxhhx" event={"ID":"fad98d15-2a08-4417-828e-c8529374012b","Type":"ContainerStarted","Data":"de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10"} Dec 09 18:33:08 crc kubenswrapper[4853]: I1209 18:33:08.525104 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cxhhx" podStartSLOduration=3.063557523 podStartE2EDuration="5.525077699s" podCreationTimestamp="2025-12-09 18:33:03 +0000 UTC" firstStartedPulling="2025-12-09 18:33:05.430904755 +0000 UTC m=+5812.365643937" lastFinishedPulling="2025-12-09 18:33:07.892424931 +0000 UTC m=+5814.827164113" observedRunningTime="2025-12-09 18:33:08.498946232 +0000 UTC m=+5815.433685414" watchObservedRunningTime="2025-12-09 18:33:08.525077699 +0000 UTC m=+5815.459816901" Dec 09 18:33:09 crc kubenswrapper[4853]: E1209 18:33:09.569685 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:33:10 crc kubenswrapper[4853]: I1209 18:33:10.026480 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:10 crc kubenswrapper[4853]: I1209 18:33:10.026755 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:10 crc kubenswrapper[4853]: I1209 18:33:10.090985 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:10 crc kubenswrapper[4853]: E1209 18:33:10.571816 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:33:10 crc kubenswrapper[4853]: I1209 18:33:10.578278 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:11 crc kubenswrapper[4853]: I1209 18:33:11.630927 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t65zx"] Dec 09 18:33:12 crc kubenswrapper[4853]: I1209 18:33:12.522396 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t65zx" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="registry-server" containerID="cri-o://89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6" gracePeriod=2 Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.057801 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.155644 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-utilities\") pod \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.155709 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvkf\" (UniqueName: \"kubernetes.io/projected/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-kube-api-access-9nvkf\") pod \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.155819 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-catalog-content\") pod \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\" (UID: \"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6\") " Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.156495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-utilities" (OuterVolumeSpecName: "utilities") pod "e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" (UID: "e8c382ce-523c-40de-a6f8-6c6b3d24e0e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.157235 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.173172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-kube-api-access-9nvkf" (OuterVolumeSpecName: "kube-api-access-9nvkf") pod "e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" (UID: "e8c382ce-523c-40de-a6f8-6c6b3d24e0e6"). InnerVolumeSpecName "kube-api-access-9nvkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.203021 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" (UID: "e8c382ce-523c-40de-a6f8-6c6b3d24e0e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.259666 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.259731 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvkf\" (UniqueName: \"kubernetes.io/projected/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6-kube-api-access-9nvkf\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.537775 4853 generic.go:334] "Generic (PLEG): container finished" podID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerID="89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6" exitCode=0 Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.537861 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t65zx" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.537864 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t65zx" event={"ID":"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6","Type":"ContainerDied","Data":"89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6"} Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.537983 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t65zx" event={"ID":"e8c382ce-523c-40de-a6f8-6c6b3d24e0e6","Type":"ContainerDied","Data":"219828dbeb2ad81b0c081123c43d10ffe3193f90a7a6849ac7c9cc8c500268eb"} Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.538055 4853 scope.go:117] "RemoveContainer" containerID="89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.574539 4853 scope.go:117] "RemoveContainer" containerID="38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.586718 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t65zx"] Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.590853 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t65zx"] Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.600032 4853 scope.go:117] "RemoveContainer" containerID="1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.649653 4853 scope.go:117] "RemoveContainer" containerID="89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6" Dec 09 18:33:13 crc kubenswrapper[4853]: E1209 18:33:13.650407 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6\": container with ID starting with 
89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6 not found: ID does not exist" containerID="89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.650463 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6"} err="failed to get container status \"89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6\": rpc error: code = NotFound desc = could not find container \"89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6\": container with ID starting with 89a257a01fd433241ea34f2f99dd5ba23e8709337da9f59305ccecaab56e9ed6 not found: ID does not exist" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.650499 4853 scope.go:117] "RemoveContainer" containerID="38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c" Dec 09 18:33:13 crc kubenswrapper[4853]: E1209 18:33:13.650953 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c\": container with ID starting with 38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c not found: ID does not exist" containerID="38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.650998 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c"} err="failed to get container status \"38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c\": rpc error: code = NotFound desc = could not find container \"38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c\": container with ID starting with 38254ccf5eb78c9f6c0ec3bb9a96a0ac27fbfd9eda75b11dee4baf1382b13f2c not found: ID does not exist" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.651029 4853 scope.go:117] "RemoveContainer" containerID="1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c" Dec 09 18:33:13 crc kubenswrapper[4853]: E1209 18:33:13.651660 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c\": container with ID starting with 1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c not found: ID does not exist" containerID="1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c" Dec 09 18:33:13 crc kubenswrapper[4853]: I1209 18:33:13.651696 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c"} err="failed to get container status \"1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c\": rpc error: code = NotFound desc = could not find container \"1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c\": container with ID starting with 1c03240f121dc4055f046f16d1086f3b1e02c29e55b00c5a31b35d5857c00b3c not found: ID does not exist" Dec 09 18:33:14 crc kubenswrapper[4853]: I1209 18:33:14.199640 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:14 crc kubenswrapper[4853]: I1209 18:33:14.199717 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:14 crc kubenswrapper[4853]: E1209 18:33:14.207783 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:14 crc kubenswrapper[4853]: E1209 18:33:14.208041 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:14 crc kubenswrapper[4853]: I1209 18:33:14.255882 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:14 crc kubenswrapper[4853]: I1209 18:33:14.490500 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-k9m2p/must-gather-qnmln"] Dec 09 18:33:14 crc kubenswrapper[4853]: I1209 18:33:14.491099 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-k9m2p/must-gather-qnmln" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerName="copy" containerID="cri-o://17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4" gracePeriod=2 Dec 09 18:33:14 crc kubenswrapper[4853]: I1209 18:33:14.503516 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-k9m2p/must-gather-qnmln"] Dec 09 18:33:14 crc kubenswrapper[4853]: I1209 18:33:14.618215 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.031555 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k9m2p_must-gather-qnmln_23ab326c-f916-4b00-af22-bf5bdfdbc052/copy/0.log" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.032029 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.122460 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23ab326c-f916-4b00-af22-bf5bdfdbc052-must-gather-output\") pod \"23ab326c-f916-4b00-af22-bf5bdfdbc052\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.122571 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgwh\" (UniqueName: \"kubernetes.io/projected/23ab326c-f916-4b00-af22-bf5bdfdbc052-kube-api-access-pdgwh\") pod \"23ab326c-f916-4b00-af22-bf5bdfdbc052\" (UID: \"23ab326c-f916-4b00-af22-bf5bdfdbc052\") " Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.129184 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ab326c-f916-4b00-af22-bf5bdfdbc052-kube-api-access-pdgwh" (OuterVolumeSpecName: "kube-api-access-pdgwh") pod "23ab326c-f916-4b00-af22-bf5bdfdbc052" (UID: "23ab326c-f916-4b00-af22-bf5bdfdbc052"). InnerVolumeSpecName "kube-api-access-pdgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.226076 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdgwh\" (UniqueName: \"kubernetes.io/projected/23ab326c-f916-4b00-af22-bf5bdfdbc052-kube-api-access-pdgwh\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.265537 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ab326c-f916-4b00-af22-bf5bdfdbc052-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "23ab326c-f916-4b00-af22-bf5bdfdbc052" (UID: "23ab326c-f916-4b00-af22-bf5bdfdbc052"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.328707 4853 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23ab326c-f916-4b00-af22-bf5bdfdbc052-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.560835 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-k9m2p_must-gather-qnmln_23ab326c-f916-4b00-af22-bf5bdfdbc052/copy/0.log" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.561254 4853 generic.go:334] "Generic (PLEG): container finished" podID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerID="17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4" exitCode=143 Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.562285 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k9m2p/must-gather-qnmln" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.562480 4853 scope.go:117] "RemoveContainer" containerID="17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.586129 4853 scope.go:117] "RemoveContainer" containerID="11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.620963 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" path="/var/lib/kubelet/pods/23ab326c-f916-4b00-af22-bf5bdfdbc052/volumes" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.622117 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" path="/var/lib/kubelet/pods/e8c382ce-523c-40de-a6f8-6c6b3d24e0e6/volumes" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.696567 4853 scope.go:117] "RemoveContainer" containerID="17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4" Dec 09 18:33:15 crc kubenswrapper[4853]: E1209 18:33:15.697368 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4\": container with ID starting with 17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4 not found: ID does not exist" containerID="17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.697413 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4"} err="failed to get container status 
\"17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4\": rpc error: code = NotFound desc = could not find container \"17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4\": container with ID starting with 17587f1edd197464da042b35473e8f740a0edce63063b28fd3d932ef590459f4 not found: ID does not exist" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.697435 4853 scope.go:117] "RemoveContainer" containerID="11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a" Dec 09 18:33:15 crc kubenswrapper[4853]: E1209 18:33:15.697920 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a\": container with ID starting with 11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a not found: ID does not exist" containerID="11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a" Dec 09 18:33:15 crc kubenswrapper[4853]: I1209 18:33:15.697957 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a"} err="failed to get container status \"11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a\": rpc error: code = NotFound desc = could not find container \"11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a\": container with ID starting with 11358e9724aee92aa9fb627aef1c57508368bce1abd743d84ff682da262dd91a not found: ID does not exist" Dec 09 18:33:16 crc kubenswrapper[4853]: I1209 18:33:16.213779 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxhhx"] Dec 09 18:33:16 crc kubenswrapper[4853]: I1209 18:33:16.574111 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cxhhx" podUID="fad98d15-2a08-4417-828e-c8529374012b" containerName="registry-server" containerID="cri-o://de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10" gracePeriod=2 Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.085242 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.177433 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-utilities\") pod \"fad98d15-2a08-4417-828e-c8529374012b\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.177782 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnfwk\" (UniqueName: \"kubernetes.io/projected/fad98d15-2a08-4417-828e-c8529374012b-kube-api-access-tnfwk\") pod \"fad98d15-2a08-4417-828e-c8529374012b\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.177834 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-catalog-content\") pod \"fad98d15-2a08-4417-828e-c8529374012b\" (UID: \"fad98d15-2a08-4417-828e-c8529374012b\") " Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.178728 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-utilities" (OuterVolumeSpecName: "utilities") pod "fad98d15-2a08-4417-828e-c8529374012b" (UID: "fad98d15-2a08-4417-828e-c8529374012b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.185758 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad98d15-2a08-4417-828e-c8529374012b-kube-api-access-tnfwk" (OuterVolumeSpecName: "kube-api-access-tnfwk") pod "fad98d15-2a08-4417-828e-c8529374012b" (UID: "fad98d15-2a08-4417-828e-c8529374012b"). InnerVolumeSpecName "kube-api-access-tnfwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.208544 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fad98d15-2a08-4417-828e-c8529374012b" (UID: "fad98d15-2a08-4417-828e-c8529374012b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.280418 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnfwk\" (UniqueName: \"kubernetes.io/projected/fad98d15-2a08-4417-828e-c8529374012b-kube-api-access-tnfwk\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.280466 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.280484 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad98d15-2a08-4417-828e-c8529374012b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.587762 4853 generic.go:334] "Generic (PLEG): container finished" podID="fad98d15-2a08-4417-828e-c8529374012b" containerID="de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10" exitCode=0 Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.588136 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cxhhx" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.588359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxhhx" event={"ID":"fad98d15-2a08-4417-828e-c8529374012b","Type":"ContainerDied","Data":"de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10"} Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.588410 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cxhhx" event={"ID":"fad98d15-2a08-4417-828e-c8529374012b","Type":"ContainerDied","Data":"4756be376abdeed9564a8390d16a937a8b6ca062ac5ee53a092a2c85993bd629"} Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.588429 4853 scope.go:117] "RemoveContainer" containerID="de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.622144 4853 scope.go:117] "RemoveContainer" containerID="5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.626248 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxhhx"] Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.636478 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cxhhx"] Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.665079 4853 scope.go:117] "RemoveContainer" containerID="61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.693539 4853 scope.go:117] "RemoveContainer" containerID="de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10" Dec 09 18:33:17 crc kubenswrapper[4853]: E1209 18:33:17.695089 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10\": container with ID starting with de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10 not found: ID does not exist" containerID="de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.695140 4853 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10"} err="failed to get container status \"de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10\": rpc error: code = NotFound desc = could not find container \"de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10\": container with ID starting with de5f6f926b8fd836491623e2b1ca46010bb77acec983f4df7f239071b64e2d10 not found: ID does not exist" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.695171 4853 scope.go:117] "RemoveContainer" containerID="5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390" Dec 09 18:33:17 crc kubenswrapper[4853]: E1209 18:33:17.695877 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390\": container with ID starting with 5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390 not found: ID does not exist" containerID="5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.695937 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390"} err="failed to get container status \"5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390\": rpc error: code = NotFound desc = could not find container \"5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390\": container with ID starting with 5b89c51eb7f0bcbaca6f396942971c6c1a54fd69602424bc256434ab2d061390 not found: ID does not exist" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.695972 4853 scope.go:117] "RemoveContainer" containerID="61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf" Dec 09 18:33:17 crc kubenswrapper[4853]: E1209 18:33:17.696471 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf\": container with ID starting with 61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf not found: ID does not exist" containerID="61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf" Dec 09 18:33:17 crc kubenswrapper[4853]: I1209 18:33:17.696514 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf"} err="failed to get container status \"61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf\": rpc error: code = NotFound desc = could not find container \"61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf\": container with ID starting with 61484b5cf56f8f469c370ff957a6da04bdc5b861fcc3df8817b6a3d82a5462bf not found: ID does not exist" Dec 09 18:33:19 crc kubenswrapper[4853]: I1209 18:33:19.579505 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad98d15-2a08-4417-828e-c8529374012b" path="/var/lib/kubelet/pods/fad98d15-2a08-4417-828e-c8529374012b/volumes" Dec 09 18:33:20 crc kubenswrapper[4853]: E1209 18:33:20.569261 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:33:21 crc kubenswrapper[4853]: E1209 18:33:21.570462 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:33:24 crc kubenswrapper[4853]: E1209 18:33:24.534726 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.593228 4853 patch_prober.go:28] interesting pod/machine-config-daemon-kwsj4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.594010 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.594320 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.595778 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"} pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.595855 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerName="machine-config-daemon" containerID="cri-o://cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" gracePeriod=600 Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.766111 4853 generic.go:334] "Generic (PLEG): container finished" podID="1e036ba1-c8bd-48d7-bd93-71993300b60f" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" exitCode=0 Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.766158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerDied","Data":"cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"} Dec 09 18:33:28 crc kubenswrapper[4853]: I1209 18:33:28.766190 4853 scope.go:117] "RemoveContainer" containerID="405135359b7635f5e67016d3c1cff75d7b75670dbb5c667e67e4591d6717b5c7" Dec 09 18:33:28 crc kubenswrapper[4853]: E1209 18:33:28.805963 4853 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:33:29 crc kubenswrapper[4853]: E1209 18:33:29.074106 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:29 crc kubenswrapper[4853]: I1209 18:33:29.776942 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:33:29 crc kubenswrapper[4853]: E1209 18:33:29.777282 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:33:32 crc kubenswrapper[4853]: I1209 18:33:32.052676 4853 scope.go:117] "RemoveContainer" containerID="ad62174e39ad3cf63b9411c9d33a9b3a0b33413feb2daa3a7bdbb35cfd9fc1e1" Dec 09 18:33:32 crc kubenswrapper[4853]: I1209 18:33:32.136091 4853 scope.go:117] "RemoveContainer" containerID="93439d86b7905f22bff6f88c2e7f8e3874030783abef6e75be8ed0cf7521bdf6" Dec 09 18:33:32 crc kubenswrapper[4853]: I1209 18:33:32.175913 4853 scope.go:117] "RemoveContainer" containerID="fadf5c9662de8239d72afeed7c3159f96516bf875f2e0c938eb9b53ee98a5f87" Dec 09 18:33:34 crc kubenswrapper[4853]: E1209 18:33:34.616051 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:33:34 crc kubenswrapper[4853]: E1209 18:33:34.931311 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:35 crc kubenswrapper[4853]: E1209 18:33:35.570113 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:33:41 crc kubenswrapper[4853]: I1209 18:33:41.568198 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:33:41 crc kubenswrapper[4853]: E1209 18:33:41.570674 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:33:44 crc kubenswrapper[4853]: E1209 18:33:44.342872 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:44 crc kubenswrapper[4853]: E1209 18:33:44.997078 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:45 crc kubenswrapper[4853]: E1209 18:33:45.569483 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:33:48 crc kubenswrapper[4853]: E1209 18:33:48.107208 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:48 crc kubenswrapper[4853]: E1209 18:33:48.108151 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:48 crc kubenswrapper[4853]: E1209 18:33:48.570729 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:33:53 crc kubenswrapper[4853]: I1209 18:33:53.595588 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:33:53 crc kubenswrapper[4853]: E1209 18:33:53.596353 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:33:55 crc kubenswrapper[4853]: E1209 18:33:55.336783 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:33:59 crc kubenswrapper[4853]: E1209 18:33:59.076510 4853 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:34:00 crc kubenswrapper[4853]: E1209 18:34:00.571267 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:34:03 crc kubenswrapper[4853]: E1209 18:34:03.584571 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:34:04 crc kubenswrapper[4853]: I1209 18:34:04.568420 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:34:04 crc kubenswrapper[4853]: E1209 18:34:04.569160 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:34:05 crc kubenswrapper[4853]: E1209 18:34:05.397454 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c382ce_523c_40de_a6f8_6c6b3d24e0e6.slice\": RecentStats: unable to find data in memory cache]" Dec 09 18:34:12 crc kubenswrapper[4853]: E1209 18:34:12.568993 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:34:14 crc kubenswrapper[4853]: E1209 18:34:14.569885 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:34:16 crc kubenswrapper[4853]: I1209 18:34:16.567721 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:34:16 crc kubenswrapper[4853]: E1209 18:34:16.568378 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:34:24 crc kubenswrapper[4853]: E1209 
Dec 09 18:34:25 crc kubenswrapper[4853]: E1209 18:34:25.569054 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:34:28 crc kubenswrapper[4853]: I1209 18:34:28.568942 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"
Dec 09 18:34:28 crc kubenswrapper[4853]: E1209 18:34:28.569776 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 18:34:37 crc kubenswrapper[4853]: E1209 18:34:37.569437 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:34:38 crc kubenswrapper[4853]: E1209 18:34:38.570046 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:34:41 crc kubenswrapper[4853]: I1209 18:34:41.568266 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"
Dec 09 18:34:41 crc kubenswrapper[4853]: E1209 18:34:41.569583 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 18:34:48 crc kubenswrapper[4853]: E1209 18:34:48.569831 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:34:52 crc kubenswrapper[4853]: E1209 18:34:52.570056 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:34:54 crc kubenswrapper[4853]: I1209 18:34:54.567712 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:34:54 crc kubenswrapper[4853]: E1209 18:34:54.568887 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:35:03 crc kubenswrapper[4853]: E1209 18:35:03.579659 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:35:06 crc kubenswrapper[4853]: E1209 18:35:06.569901 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:35:08 crc kubenswrapper[4853]: I1209 18:35:08.568342 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:35:08 crc kubenswrapper[4853]: E1209 18:35:08.569067 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:35:15 crc kubenswrapper[4853]: E1209 18:35:15.569071 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:35:18 crc kubenswrapper[4853]: E1209 18:35:18.571532 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:35:20 crc kubenswrapper[4853]: I1209 18:35:20.567661 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:35:20 crc kubenswrapper[4853]: E1209 18:35:20.568579 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Dec 09 18:35:26 crc kubenswrapper[4853]: E1209 18:35:26.570257 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:35:31 crc kubenswrapper[4853]: I1209 18:35:31.568011 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"
Dec 09 18:35:31 crc kubenswrapper[4853]: E1209 18:35:31.568973 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 18:35:33 crc kubenswrapper[4853]: E1209 18:35:33.588646 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:35:38 crc kubenswrapper[4853]: E1209 18:35:38.571205 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:35:46 crc kubenswrapper[4853]: I1209 18:35:46.567627 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"
Dec 09 18:35:46 crc kubenswrapper[4853]: E1209 18:35:46.569140 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 18:35:48 crc kubenswrapper[4853]: E1209 18:35:48.570302 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:35:51 crc kubenswrapper[4853]: E1209 18:35:51.570547 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:35:59 crc kubenswrapper[4853]: E1209 18:35:59.574027 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:36:01 crc kubenswrapper[4853]: I1209 18:36:01.567436 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:36:01 crc kubenswrapper[4853]: E1209 18:36:01.568153 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:36:05 crc kubenswrapper[4853]: E1209 18:36:05.571766 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:36:12 crc kubenswrapper[4853]: I1209 18:36:12.567981 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:36:12 crc kubenswrapper[4853]: E1209 18:36:12.569311 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:36:14 crc kubenswrapper[4853]: E1209 18:36:14.570524 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:36:18 crc kubenswrapper[4853]: E1209 18:36:18.571665 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:36:25 crc kubenswrapper[4853]: I1209 18:36:25.569491 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:36:25 crc kubenswrapper[4853]: E1209 18:36:25.570342 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 09 18:36:27 crc kubenswrapper[4853]: E1209 18:36:27.570877 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:36:31 crc kubenswrapper[4853]: E1209 18:36:31.569935 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:36:39 crc kubenswrapper[4853]: I1209 18:36:39.568181 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"
Dec 09 18:36:39 crc kubenswrapper[4853]: E1209 18:36:39.569211 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
Dec 09 18:36:39 crc kubenswrapper[4853]: E1209 18:36:39.569702 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:36:45 crc kubenswrapper[4853]: E1209 18:36:45.571080 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"
Dec 09 18:36:52 crc kubenswrapper[4853]: E1209 18:36:52.569559 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d"
Dec 09 18:36:53 crc kubenswrapper[4853]: I1209 18:36:53.590317 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b"
Dec 09 18:36:53 crc kubenswrapper[4853]: E1209 18:36:53.591489 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f"
podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:36:59 crc kubenswrapper[4853]: E1209 18:36:59.569380 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:37:04 crc kubenswrapper[4853]: I1209 18:37:04.569007 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:37:04 crc kubenswrapper[4853]: E1209 18:37:04.572489 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:37:05 crc kubenswrapper[4853]: I1209 18:37:05.570647 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 18:37:05 crc kubenswrapper[4853]: E1209 18:37:05.710117 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:37:05 crc kubenswrapper[4853]: E1209 18:37:05.710177 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 18:37:05 crc kubenswrapper[4853]: E1209 18:37:05.710298 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-52zlg_openstack(3819bec9-a99d-4c1a-a387-3f0dff9f4b1d): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:37:05 crc kubenswrapper[4853]: E1209 18:37:05.711433 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:37:12 crc kubenswrapper[4853]: E1209 18:37:12.569328 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:37:16 crc kubenswrapper[4853]: I1209 18:37:16.568081 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:37:16 crc kubenswrapper[4853]: E1209 18:37:16.568869 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:37:18 crc kubenswrapper[4853]: E1209 18:37:18.571832 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:37:27 crc kubenswrapper[4853]: E1209 18:37:27.693839 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:37:27 crc kubenswrapper[4853]: E1209 18:37:27.694399 4853 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 18:37:27 crc kubenswrapper[4853]: E1209 18:37:27.694833 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h5dch9fh9h58bh598h9bh56fh96h679h674h568h557h559hd8h5d5h65h5fhb9h579h59dhfh597hd7h58fhcdh5cch5bfh59h5f6h57fh6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-558g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e815965-15fe-4f84-8eb4-133f91163a08): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 18:37:27 crc kubenswrapper[4853]: E1209 18:37:27.695990 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:37:31 crc kubenswrapper[4853]: I1209 18:37:31.568633 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:37:31 crc kubenswrapper[4853]: E1209 18:37:31.569283 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:37:32 crc kubenswrapper[4853]: E1209 18:37:32.568452 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.779637 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sb6ps"] Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780561 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="extract-utilities" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.780579 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="extract-utilities" Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780628 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerName="gather" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.780640 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerName="gather" Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780655 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad98d15-2a08-4417-828e-c8529374012b" containerName="extract-content" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.780664 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad98d15-2a08-4417-828e-c8529374012b" containerName="extract-content" Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780679 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="extract-content" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.780686 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="extract-content" Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780705 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad98d15-2a08-4417-828e-c8529374012b" containerName="extract-utilities" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.780714 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad98d15-2a08-4417-828e-c8529374012b" containerName="extract-utilities" Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780735 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad98d15-2a08-4417-828e-c8529374012b" containerName="registry-server" Dec 09 18:37:35 crc 
Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780763 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="registry-server"
Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.780770 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="registry-server"
Dec 09 18:37:35 crc kubenswrapper[4853]: E1209 18:37:35.780782 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerName="copy"
Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.780789 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerName="copy"
Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.781079 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad98d15-2a08-4417-828e-c8529374012b" containerName="registry-server"
Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.781107 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerName="copy"
Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.781131 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c382ce-523c-40de-a6f8-6c6b3d24e0e6" containerName="registry-server"
Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.781151 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ab326c-f916-4b00-af22-bf5bdfdbc052" containerName="gather"
Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.783361 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb6ps"
Need to start a new one" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.793796 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sb6ps"] Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.893914 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwhqs\" (UniqueName: \"kubernetes.io/projected/3570ff22-2d30-4e3f-978b-ba51d7d2c666-kube-api-access-dwhqs\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.894285 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-utilities\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.894539 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-catalog-content\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.996815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-catalog-content\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.997264 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwhqs\" (UniqueName: \"kubernetes.io/projected/3570ff22-2d30-4e3f-978b-ba51d7d2c666-kube-api-access-dwhqs\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.997450 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-catalog-content\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.997638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-utilities\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:35 crc kubenswrapper[4853]: I1209 18:37:35.998057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-utilities\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:36 crc kubenswrapper[4853]: I1209 18:37:36.025725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dwhqs\" (UniqueName: \"kubernetes.io/projected/3570ff22-2d30-4e3f-978b-ba51d7d2c666-kube-api-access-dwhqs\") pod \"redhat-operators-sb6ps\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:36 crc kubenswrapper[4853]: I1209 18:37:36.104013 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:36 crc kubenswrapper[4853]: I1209 18:37:36.640740 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sb6ps"] Dec 09 18:37:37 crc kubenswrapper[4853]: I1209 18:37:37.198148 4853 generic.go:334] "Generic (PLEG): container finished" podID="3570ff22-2d30-4e3f-978b-ba51d7d2c666" containerID="06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b" exitCode=0 Dec 09 18:37:37 crc kubenswrapper[4853]: I1209 18:37:37.198524 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb6ps" event={"ID":"3570ff22-2d30-4e3f-978b-ba51d7d2c666","Type":"ContainerDied","Data":"06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b"} Dec 09 18:37:37 crc kubenswrapper[4853]: I1209 18:37:37.198572 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb6ps" event={"ID":"3570ff22-2d30-4e3f-978b-ba51d7d2c666","Type":"ContainerStarted","Data":"e59175e66ecfdc85583a3b005b1d5c999ddf08771b4ee80ead07c30294b49e5b"} Dec 09 18:37:38 crc kubenswrapper[4853]: I1209 18:37:38.215727 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb6ps" event={"ID":"3570ff22-2d30-4e3f-978b-ba51d7d2c666","Type":"ContainerStarted","Data":"fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324"} Dec 09 18:37:42 crc kubenswrapper[4853]: I1209 18:37:42.263961 4853 generic.go:334] "Generic (PLEG): container finished" podID="3570ff22-2d30-4e3f-978b-ba51d7d2c666" containerID="fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324" exitCode=0 Dec 09 18:37:42 crc kubenswrapper[4853]: I1209 18:37:42.264171 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb6ps" event={"ID":"3570ff22-2d30-4e3f-978b-ba51d7d2c666","Type":"ContainerDied","Data":"fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324"} Dec 09 18:37:42 crc kubenswrapper[4853]: E1209 18:37:42.570736 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:37:44 crc kubenswrapper[4853]: I1209 18:37:44.293193 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb6ps" event={"ID":"3570ff22-2d30-4e3f-978b-ba51d7d2c666","Type":"ContainerStarted","Data":"ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216"} Dec 09 18:37:44 crc kubenswrapper[4853]: I1209 18:37:44.329809 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sb6ps" podStartSLOduration=3.303153785 podStartE2EDuration="9.32978253s" podCreationTimestamp="2025-12-09 18:37:35 +0000 UTC" firstStartedPulling="2025-12-09 18:37:37.200033557 +0000 UTC m=+6084.134772739" 
lastFinishedPulling="2025-12-09 18:37:43.226662262 +0000 UTC m=+6090.161401484" observedRunningTime="2025-12-09 18:37:44.317753105 +0000 UTC m=+6091.252492297" watchObservedRunningTime="2025-12-09 18:37:44.32978253 +0000 UTC m=+6091.264521722" Dec 09 18:37:44 crc kubenswrapper[4853]: I1209 18:37:44.568139 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:37:44 crc kubenswrapper[4853]: E1209 18:37:44.568805 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:37:44 crc kubenswrapper[4853]: E1209 18:37:44.570947 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:37:46 crc kubenswrapper[4853]: I1209 18:37:46.105003 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:46 crc kubenswrapper[4853]: I1209 18:37:46.105338 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:47 crc kubenswrapper[4853]: I1209 18:37:47.169400 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sb6ps" podUID="3570ff22-2d30-4e3f-978b-ba51d7d2c666" containerName="registry-server" probeResult="failure" output=< Dec 09 18:37:47 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Dec 09 18:37:47 crc kubenswrapper[4853]: > Dec 09 18:37:54 crc kubenswrapper[4853]: E1209 18:37:54.569308 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:37:55 crc kubenswrapper[4853]: I1209 18:37:55.568117 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:37:55 crc kubenswrapper[4853]: E1209 18:37:55.568881 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:37:55 crc kubenswrapper[4853]: E1209 18:37:55.570182 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" 
podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:37:56 crc kubenswrapper[4853]: I1209 18:37:56.175191 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:56 crc kubenswrapper[4853]: I1209 18:37:56.241572 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:56 crc kubenswrapper[4853]: I1209 18:37:56.425574 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sb6ps"] Dec 09 18:37:57 crc kubenswrapper[4853]: I1209 18:37:57.462845 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sb6ps" podUID="3570ff22-2d30-4e3f-978b-ba51d7d2c666" containerName="registry-server" containerID="cri-o://ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216" gracePeriod=2 Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.009091 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.147805 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-catalog-content\") pod \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.148178 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwhqs\" (UniqueName: \"kubernetes.io/projected/3570ff22-2d30-4e3f-978b-ba51d7d2c666-kube-api-access-dwhqs\") pod \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.148575 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-utilities\") pod \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\" (UID: \"3570ff22-2d30-4e3f-978b-ba51d7d2c666\") " Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.149817 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-utilities" (OuterVolumeSpecName: "utilities") pod "3570ff22-2d30-4e3f-978b-ba51d7d2c666" (UID: "3570ff22-2d30-4e3f-978b-ba51d7d2c666"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.160765 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3570ff22-2d30-4e3f-978b-ba51d7d2c666-kube-api-access-dwhqs" (OuterVolumeSpecName: "kube-api-access-dwhqs") pod "3570ff22-2d30-4e3f-978b-ba51d7d2c666" (UID: "3570ff22-2d30-4e3f-978b-ba51d7d2c666"). InnerVolumeSpecName "kube-api-access-dwhqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.251462 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwhqs\" (UniqueName: \"kubernetes.io/projected/3570ff22-2d30-4e3f-978b-ba51d7d2c666-kube-api-access-dwhqs\") on node \"crc\" DevicePath \"\"" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.251887 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.263477 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3570ff22-2d30-4e3f-978b-ba51d7d2c666" (UID: "3570ff22-2d30-4e3f-978b-ba51d7d2c666"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.354211 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3570ff22-2d30-4e3f-978b-ba51d7d2c666-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.480818 4853 generic.go:334] "Generic (PLEG): container finished" podID="3570ff22-2d30-4e3f-978b-ba51d7d2c666" containerID="ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216" exitCode=0 Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.480893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb6ps" event={"ID":"3570ff22-2d30-4e3f-978b-ba51d7d2c666","Type":"ContainerDied","Data":"ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216"} Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.480907 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sb6ps" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.480933 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sb6ps" event={"ID":"3570ff22-2d30-4e3f-978b-ba51d7d2c666","Type":"ContainerDied","Data":"e59175e66ecfdc85583a3b005b1d5c999ddf08771b4ee80ead07c30294b49e5b"} Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.480965 4853 scope.go:117] "RemoveContainer" containerID="ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.519215 4853 scope.go:117] "RemoveContainer" containerID="fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.525063 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sb6ps"] Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.534336 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sb6ps"] Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.562876 4853 scope.go:117] "RemoveContainer" containerID="06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.622252 4853 scope.go:117] "RemoveContainer" containerID="ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216" Dec 09 18:37:58 crc kubenswrapper[4853]: E1209 18:37:58.622882 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216\": container with ID starting with ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216 not found: ID does not exist" containerID="ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.622943 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216"} err="failed to get container status \"ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216\": rpc error: code = NotFound desc = could not find container \"ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216\": container with ID starting with ddde3148f34b102292a89e6951f77ee3b0460eb1186a6e6a1cdad940bfafb216 not found: ID does not exist" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.622982 4853 scope.go:117] "RemoveContainer" containerID="fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324" Dec 09 18:37:58 crc kubenswrapper[4853]: E1209 18:37:58.623502 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324\": container with ID starting with fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324 not found: ID does not exist" containerID="fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.623551 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324"} err="failed to get container status \"fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324\": rpc error: code = NotFound desc = could not find container 
\"fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324\": container with ID starting with fb997fda1b456e8568c3163331475213ec2e62ae038cea2f63f7e14c0bedc324 not found: ID does not exist" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.623583 4853 scope.go:117] "RemoveContainer" containerID="06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b" Dec 09 18:37:58 crc kubenswrapper[4853]: E1209 18:37:58.624009 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b\": container with ID starting with 06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b not found: ID does not exist" containerID="06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b" Dec 09 18:37:58 crc kubenswrapper[4853]: I1209 18:37:58.624082 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b"} err="failed to get container status \"06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b\": rpc error: code = NotFound desc = could not find container \"06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b\": container with ID starting with 06516c5e4e870a7838fdfc12c9f7e557f27e98ba3582b13bbbfb452a5dd5a15b not found: ID does not exist" Dec 09 18:37:59 crc kubenswrapper[4853]: I1209 18:37:59.585282 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3570ff22-2d30-4e3f-978b-ba51d7d2c666" path="/var/lib/kubelet/pods/3570ff22-2d30-4e3f-978b-ba51d7d2c666/volumes" Dec 09 18:38:05 crc kubenswrapper[4853]: E1209 18:38:05.572480 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:38:07 crc kubenswrapper[4853]: I1209 18:38:07.568342 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:38:07 crc kubenswrapper[4853]: E1209 18:38:07.569264 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:38:08 crc kubenswrapper[4853]: E1209 18:38:08.571114 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:38:17 crc kubenswrapper[4853]: E1209 18:38:17.569992 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:38:18 crc kubenswrapper[4853]: I1209 18:38:18.567342 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:38:18 crc kubenswrapper[4853]: E1209 18:38:18.568129 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kwsj4_openshift-machine-config-operator(1e036ba1-c8bd-48d7-bd93-71993300b60f)\"" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" podUID="1e036ba1-c8bd-48d7-bd93-71993300b60f" Dec 09 18:38:22 crc kubenswrapper[4853]: E1209 18:38:22.570968 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:38:28 crc kubenswrapper[4853]: E1209 18:38:28.584847 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08" Dec 09 18:38:30 crc kubenswrapper[4853]: I1209 18:38:30.568559 4853 scope.go:117] "RemoveContainer" containerID="cf1502a01fb297bf9ba1766c6d6135bf109f40b46994b643786b62e700a1963b" Dec 09 18:38:30 crc kubenswrapper[4853]: I1209 18:38:30.891913 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kwsj4" event={"ID":"1e036ba1-c8bd-48d7-bd93-71993300b60f","Type":"ContainerStarted","Data":"6b678fbd4a1697e0919a73ec6eb2f3479234c43c4f8fb47d73fe289a7afeb35f"} Dec 09 18:38:37 crc kubenswrapper[4853]: E1209 18:38:37.578210 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-52zlg" podUID="3819bec9-a99d-4c1a-a387-3f0dff9f4b1d" Dec 09 18:38:42 crc kubenswrapper[4853]: E1209 18:38:42.570337 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="6e815965-15fe-4f84-8eb4-133f91163a08"